A recipe proved to solve every* VM’s proxy problems

Summary: Intro | Obtaining the proxy server address | Configuring proxy settings – checklist | Other tools

*Fedora / RHEL / CentOS

I originally wanted to title this post Welcome to Proxy Hell, because getting the proxy settings right on a VM can, at least at first, feel like a nightmare, especially if you have no idea where to start or, more depressingly, when none of your attempts to fix the problem seem to work. If you work in an office, you have almost inevitably come across proxies. It has become standard for companies to guard their network traffic with a proxy server. The idea is that the server acts as an intermediary between the private company network and the internet, which both hides the web traffic from outside eyes and can serve as a base for implementing access authentication and bandwidth control.
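To make the intermediary idea concrete, here is a minimal sketch, in Python purely for illustration, of what “going through the proxy” means for a process running on the VM. The address proxy.example.com:8080 is a placeholder, not a real server; your network admin (or the rest of this post) will tell you what to put there.

```python
import os
import urllib.request

# Placeholder corporate proxy address -- replace with your real one.
PROXY = "http://proxy.example.com:8080"

# Tell the standard library (and most well-behaved tools) to route web
# traffic through the intermediary instead of connecting directly.
os.environ["http_proxy"] = PROXY
os.environ["https_proxy"] = PROXY

# urllib picks these variables up automatically, so this request now goes
# VM -> proxy -> internet instead of timing out at the company firewall.
with urllib.request.urlopen("https://www.google.co.uk", timeout=10) as resp:
    print(resp.status, resp.geturl())
```

Many command-line tools on Fedora, RHEL and CentOS honour the same http_proxy/https_proxy variables, though some (yum among them) also keep their own proxy settings, which is why the checklist later in this post goes further than a couple of environment variables.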

Perhaps the first time you consciously notice a proxy is when your browser’s homepage, instead of taking you to Google.co.uk, goes to Google.in or Google.pl, and the Search button shows up in a language you didn’t expect. That’s your proxy server’s location fooling Google. The second time you come across proxies is less amusing: it’s when you start working with, or worse, configuring virtual environments and realise that even basic tasks, like accessing a webpage or installing a package, don’t work. For instance, if you followed the Hadoop clustering guide from my last post in an office environment, you wouldn’t have been able to get most of it working without setting up a proxy. Yet the guide conveniently skips that topic with a vague warning: make sure you’re not behind a proxy. So, what to do if you were? Continue reading “A recipe proved to solve every* VM’s proxy problems”

Your first DIY Hadoop cluster

Summary: Intro | Linux VM Setup | VM Networking | Extending a Hadoop Cluster

At times I wish I had started my journey with Big Data earlier, so that I could have entered the market in 2008-2009. Though Hadoopmania is still going strong in IT, those years were a golden era for Hadoop professionals. With any sort of Hadoop experience you could be considered for an £80,000 position. There was such a shortage of Hadoop skills in the job market that even a complete beginner could land a wonderfully overpaid job. Today you can’t just wing it at the interview; the market has matured and there are many talented and qualified people pursuing careers in Big Data. That said, years on, the demand for Hadoop knowledge is still on the rise, making it a profitable career choice for the foreseeable future.

Hadoop Salaries in the UK (Source: IT Jobs Watch)


While these days there seems to be a separation between analyst and administrator/developer roles on the market, I am of the opinion that each role has to be aware of the objectives of the other. That is: an analyst should understand the workings of a Hadoop cluster, just as a developer needs to understand the demands an analysis will put on the worker nodes. It’s much like a skilled Business Intelligence specialist who appreciates the impact a database design has on query speed and system availability. That philosophy is the why behind this post: getting to know Hadoop by configuring a cluster yourself. You could be creating a cluster simply because you want to see how it’s done, or perhaps you are looking to extend the processing power of your system with an extra server. Continue reading “Your first DIY Hadoop cluster”
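As a small, hedged taste of what “understanding the workings of a cluster” can look like in practice, the sketch below asks YARN’s ResourceManager REST API for basic cluster metrics. The hostname assumes something like Cloudera’s QuickStart VM, and 8088 is the ResourceManager’s default web port; adjust both to wherever yours actually runs.

```python
import json
import urllib.request

# Assumed ResourceManager address -- quickstart.cloudera is the hostname used
# by Cloudera's QuickStart VM and 8088 is the default web port; change as needed.
RM_METRICS = "http://quickstart.cloudera:8088/ws/v1/cluster/metrics"

with urllib.request.urlopen(RM_METRICS, timeout=10) as resp:
    metrics = json.load(resp)["clusterMetrics"]

# A few numbers worth knowing before launching a heavy analysis, and the first
# place to look after wiring an extra worker node into the cluster.
print("active worker nodes:", metrics["activeNodes"])
print("total memory (MB)  :", metrics["totalMB"])
print("total vcores       :", metrics["totalVirtualCores"])
print("running apps       :", metrics["appsRunning"])
```

If you do add that extra server, activeNodes (and the memory and vcore totals) going up is the quickest confirmation that the cluster has actually accepted it.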

My computer AKA my first big data machine

Summary: Intro | Virtualisation Software | Cloudera’s QuickStart VM | Importing a VM

In this post, I will introduce Virtual Machines: a core tool for any data scientist. If you, like me, get to experiment with different technologies at work, you are probably already familiar with Virtual Machines. VMs are the easiest way to test something out without installing it on your computer and risking messing up your working environment. In essence, a VM is like a mini (virtual!) computer you put on your computer; it has its own environment, like Windows, Linux or macOS, and it usually comes with a bunch of pre-installed and configured tools, so that you don’t have to worry about any (or much) setup. So you might have a Windows machine installed on your actual Windows machine, and while these two share computing resources and disk space, they are separate instances of Windows. Plus, you can delete or change the virtual machine as you please, you can have many of them and, by design, none of this has any impact on your original working environment.

Continue reading “My computer AKA my first big data machine”

Hello World

A traditional first post, then!

I want this space to become a journal of my wanderings in the world of data analysis. While I’ve been working as a consultant for a few years now, there is a multitude of topics I have never tackled and technologies I know nothing about. The idea for this learning space is to take on a different problem every week or two.

I am no writer, and my previous experience is in creating functional specifications, where every term had to be precise and the sentences kept short and simple. In my Business Analyst beginnings the documentation I produced was poor, but with time I saw my writing quality go up. With that in mind, I expect ‘blogging’ will be a learning curve too (but I’m keeping a positive outlook).

Thanks,

Eve the Analyst