R | Point-in-polygon, a mathematical cookie-cutter

Point-in-polygon is a textbook problem in geographical analysis: given a list of geocoordinates, return those that fall within the boundary of an area. You could feed the algorithm a list of cities from around the world and it would tell you which of them belong to Sri Lanka and which to a completely random shape you drew on planet Earth. The technique applies to many scenarios: analyses that aren’t based on administrative boundaries, situations in which polygons change over time, or problems that aren’t geographical at all, like computer graphics. Not so long ago, I turned to point-in-polygon to generate a set of towns and villages to plot on a map of Poland from 1933. No such list has been made available on the web, and I wasn’t super keen on typing out thousands of locations. Instead, I used that mathematical cookie-cutter to extract only those locations from today’s Poland, Ukraine, Belarus, and Russia that fall within the boundaries of interwar Poland. In this post I will show how to perform a point-in-polygon analysis in R and possibly automate a significant chunk of data preparation for map visualisations.
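To make the idea concrete, here is a minimal sketch of such a filter in R using the sf package; the sample places, their coordinates, and the poland_1933.shp file name are made up for illustration rather than taken from the actual project.

```r
# A point-in-polygon filter with sf: keep only the points that fall
# inside a boundary polygon. Sample data and file name are placeholders.
library(sf)

places <- data.frame(
  name = c("Lwów", "Wilno", "Poznań"),
  lon  = c(24.03, 25.28, 16.93),
  lat  = c(49.84, 54.69, 52.41)
)

# Turn the plain data frame into spatial points (WGS84 longitude/latitude).
points   <- st_as_sf(places, coords = c("lon", "lat"), crs = 4326)

# Read the boundary polygon and make sure both layers share the same CRS.
boundary <- st_read("poland_1933.shp")
boundary <- st_transform(boundary, st_crs(points))

# st_within() lists, for every point, the polygons it falls inside;
# a point is kept if it matched at least one polygon.
inside <- lengths(st_within(points, boundary)) > 0
places[inside, ]
```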

Continue reading “R | Point-in-polygon, a mathematical cookie-cutter”

Changing dataset projection with OGR2OGR

Previously we used OGR2OGR to extract a couple of features from a large geographical dataset. OGR2OGR can do so much more – today we’ll look at its reprojection capabilities. Reprojection is a mathematical translation of a dataset from one coordinate reference system to another, like Albers to Mercator. Sometimes the geographical data we receive has to be reprojected to conform to our other datasets before further use. My first encounter with mismatching projections came about while working on my master’s thesis project. I struggled with a point-in-polygon function that was supposed to filter a set of points based on a geographical boundary and stubbornly wouldn’t return anything. I soon found out that my map was digitised in the EPSG:3857 projection (a.k.a. Web Mercator, used by Google Maps) while my village coordinates used the WGS84 coordinate system. That’s how OGR2OGR and I met.
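As a taste of what the full post covers, the reprojection itself comes down to a single command; the shapefile names below are placeholders rather than the files from my project.

```sh
# Reproject a shapefile from Web Mercator (EPSG:3857) to WGS84 (EPSG:4326).
# With ogr2ogr the destination comes before the source; names are placeholders.
ogr2ogr -t_srs EPSG:4326 map_wgs84.shp map_3857.shp

# If the source file has no CRS recorded, declare it explicitly with -s_srs.
ogr2ogr -s_srs EPSG:3857 -t_srs EPSG:4326 map_wgs84.shp map_3857.shp
```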

Continue reading “Changing dataset projection with OGR2OGR”

D3.js v5: Promise syntax & examples

Here is my take on promises, the latest addition to D3.js syntax. The other week I attempted to brush off my D3.js skills and got stuck at the most basic task of printing CSV data on my HTML page. This is how I learnt that version 5 of D3.js replaced asynchronous callbacks with promises – and irreversibly changed the way we work with data sets. Getting your head around promises can take time, especially if you – like me – aren’t a JavaScript programming pro. In this post I’ll share my lessons learnt and provide some guidance for anyone lost in the world of promises.
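As a quick preview, here is a minimal sketch contrasting the old callback style with the v5 promise style; the file names are placeholders, not examples from the post.

```js
// D3 v4 and earlier: data loading took a callback as the last argument.
// d3.csv("cities.csv", function(error, data) { /* work with data */ });

// D3 v5: d3.csv() returns a Promise, so the data arrives in .then()
// and loading or parsing errors are handled in .catch().
d3.csv("cities.csv")
  .then(function(data) {
    console.log(data);        // an array of row objects
  })
  .catch(function(error) {
    console.error(error);
  });

// Several files can be loaded in parallel with Promise.all().
Promise.all([d3.csv("cities.csv"), d3.json("boundary.json")])
  .then(function([cities, boundary]) {
    // both data sets are ready here
  });
```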

Continue reading “D3.js v5: Promise syntax & examples”

Automate the Boring Stuff with CMD

25 years after being introduced to the Windows toolkit, CMD still has it. This post collects a couple of everyday file-manipulation scenarios that can be accomplished with the command-line interpreter.

The Windows Command Prompt is a command-line interface for file and process management on Windows. A big deal in the ’90s, the tool is not overwhelmingly popular today among data scientists, or any Windows user for that matter. But this old-school tool still proves useful for basic file manipulation. It might not have the capabilities of, say, Python, but in situations where you cannot use a programming language, or when you are simply looking for a challenge, CMD will always be there for you. Recently, I helped a friend automate some tedious copy-and-paste operations and reduced his workload by days. Our collaboration is documented below, along with some code snippets.
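To give a flavour of what such a script looks like, here is a small sketch of a batch file that gathers scattered CSV files into one folder; the paths are placeholders, not the ones from my friend’s job.

```bat
@echo off
:: Small sketch with placeholder paths: gather every CSV file found anywhere
:: under C:\reports into a single folder, with no manual copying and pasting.
if not exist "C:\merged" mkdir "C:\merged"

:: FOR /R walks the folder tree recursively; %%F holds each file's full path.
:: (At the interactive prompt you would write %F instead of %%F.)
for /r "C:\reports" %%F in (*.csv) do copy "%%F" "C:\merged\"
```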

*The title is a reference to the awesome Automate the Boring Stuff with Python by Al Sweigart.

Continue reading “Automate the Boring Stuff with CMD”

Data Science Conference: Lands of Loyal 2018

I’m just back from Scotland where – besides having some lovely time hanging out with friends – I attended the annual Lands of Loyal (LOL) Data Science conference in Alyth. The event is an informal get-together for Data Science & Business Intelligence graduates from the University of Dundee. This time my talk was among those selected for the day: I decided to rework my last blog post on Personally Identifiable Data (and not!) into a 20-minute presentation and packed it with questions (some of which I cannot answer) about our identity on the web.

Continue reading “Data Science Conference: Lands of Loyal 2018”

Right to Explanation: a Right that Never Was (in GDPR)

The conversation around the Right to Explanation reminded me of the Mandela Effect. Just as many people believe Mandela died long before he actually did, the Right to Explanation is falsely attributed to the GDPR’s collection of rules. An offshoot of early GDPR conversations, the supposed rule has by now developed its own literature on the internet. Posts suggesting that the law threatens Artificial Intelligence have flooded Google (examples here, here, and here), while uncertainty-fuelled paranoia has taken over LinkedIn. Is this internet misinformation at its finest, or is there more to the discussion? I suggest we review what the Right to Explanation is and why an absent law is causing such a stir on the world wide web.

Continue reading “Right to Explanation: a Right that Never Was (in GDPR)”

About managers who love aggregations a bit too much

Summary: Business instinct | When sums add up | Data-driven decision patching 

This is a story about companies that like aggregations a bit too much. Data-driven decision making seems to be the new holy grail in management, but can the numbers always be trusted? What matters most in a data-savvy business: the people, the right technology, or – spoiler alert – something more fundamental? These questions become particularly urgent in the new economy, where failing to embrace data can be a major growth impediment or, worse, a death sentence for the business.

Continue reading “About managers who love aggregations a bit too much”

The search is long and the goal is elusive: Data Scientist (Desperately) Wanted

Summary: Wanted: Data Scientist | A bird in the hand is worth two in the bush | A little stir | The infallible art of taking steps back

In this article I will look at how organisations can engineer their own Data Science team without losing their minds in the process or spending big money. As more and more companies want to be data-driven, they join the frantic search for the right staff to fuel these initiatives. Finding data-fluent talent is not easy: according to McKinsey, “by 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.” The hunt is on for a skill set that is still relatively new to the market and is only starting to be taught at universities. My belief is that fishing for a Data Scientist ‘superstar’ is often counterproductive and inevitably leads to the realisation that one person cannot do it all. Instead, investing in appropriate training for existing staff can bring long-lasting benefits to the company.

Continue reading “The search is long and the goal is elusive: Data Scientist (Desperately) Wanted”