How to mine the whole country's UCE results using Python

Disclaimer: This was written solely for Educational Purposes!

UNEB recently released the 2018 UCE results and, to my surprise, they were just thrown out in the open. If you come up with any sensible index number, I bet you will be able to retrieve its results at:


It gets interesting: for a data scientist like me, this can be a great trove of information for a hobby project. I could easily find out the most popular names for babies born around 2002–2003. I could also check which subject was performed best against media reports. If I were a school looking to make new hires, this data could come in handy to aid my poaching: identify which schools performed particular subjects best, then net their teachers.

Project setup:

We shall use Python 3.7 with the Requests and Beautiful Soup libraries.

I have tried as much as possible to document everything and add explanatory comments where necessary, even though this code was written in about 30 minutes.

- Visit the results website in Chrome.
- Open DevTools by right-clicking in the browser window and choosing Inspect.
- Switch to the Network tab of DevTools.
- Make a request for an index number.
- Copy the request from the browser as cURL, as shown below.
- Copy the HTTP parameters from the request and use them in your code.

Screenshot from Chrome DevTools

The copied cURL request will look like this:

curl '' -H 'cookie: PHPSESSID=adkhagdkagkagjfadfkajdka' -H 'origin:' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en,en-US;q=0.9,fr-FR;q=0.8,fr;q=0.7,ar-EG;q=0.6,ar;q=0.5,my-ZG;q=0.4,my;q=0.3' -H 'user-agent: Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Mobile Safari/537.36' -H 'content-type: application/x-www-form-urlencoded; charset=UTF-8' -H 'accept: */*' -H 'referer:' -H 'authority:' -H 'x-requested-with: XMLHttpRequest' -H 'dnt: 1' --data 'index_no=uXXXX%2FXXX' --compressed

Alternatively, you can convert your copied cURL request into Python or Node.js code using this service:
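The copied request translates roughly into a Requests call like the sketch below. The portal's URL is redacted in the cURL command above, so `RESULTS_URL` is a placeholder you would fill in from your own copied request, and `fetch_results_html` is a name I made up for illustration.

```python
import requests

RESULTS_URL = ""  # redacted above; paste the URL from your own copied request

# Only the headers that plausibly matter; the rest of the copied set
# (accept-language, dnt, ...) can usually be dropped.
HEADERS = {
    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
    "x-requested-with": "XMLHttpRequest",
    "user-agent": (
        "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/71.0.3578.98 Mobile Safari/537.36"
    ),
}


def fetch_results_html(index_no, session=None):
    """POST one index number (a hypothetical 'U0001/001', say) and return the raw HTML."""
    sess = session or requests.Session()
    # Requests url-encodes the form data for us, so '/' becomes %2F
    # exactly as in the copied --data payload.
    resp = sess.post(RESULTS_URL, headers=HEADERS, data={"index_no": index_no})
    resp.raise_for_status()
    return resp.text


# html = fetch_results_html("U0001/001")  # needs the real URL filled in first
```

Note that Requests handles the `%2F` encoding of the slash in the index number automatically, which is why the function takes the plain `'U0001/001'` form.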

And the HTML (we call it soup! 😍) response will look like:

- We make an HTTP POST request from Python; the data field takes in an index number.
- We get back HTML.
- We parse the HTML using BeautifulSoup and split the contents into a dictionary.
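The parsing step above could be sketched as follows. The article does not show the portal's actual HTML, so the table layout assumed here (one `<tr>` per subject, with the subject code and grade in two `<td>` cells) is purely illustrative; adapt the selectors to whatever the real soup looks like.

```python
from bs4 import BeautifulSoup


def parse_results(html):
    """Split the returned HTML into a {subject: grade} dictionary."""
    soup = BeautifulSoup(html, "html.parser")
    results = {}
    for row in soup.find_all("tr"):
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        if len(cells) == 2:  # assumed layout: subject code, grade
            subject, grade = cells
            results[subject] = grade
    return results


# Made-up sample in the assumed layout, just to show the shape of the output:
sample = """
<table>
  <tr><td>ENG</td><td>4</td></tr>
  <tr><td>MAT</td><td>6</td></tr>
</table>
"""
print(parse_results(sample))  # {'ENG': '4', 'MAT': '6'}
```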

I tried, in vain, to change the HTTP headers so that I would receive JSON back.

After scraping, a dictionary result of a single student looks like this:

{'ENG': '4', 'LIT': '7', 'HIS': '4', 'GEO': '5', 'MAT': '6', 'PHY': '6', 'CHE': '7', 'BIO': '6', 'COM': '6', 'CST': '7'}
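Once each student is a dictionary like the one above, simple aggregate questions become one-liners. A sketch with made-up data: tally how often each grade was awarded in Mathematics across a batch of scraped results.

```python
from collections import Counter

# Made-up scraped results, each in the dictionary shape shown above.
students = [
    {"ENG": "4", "MAT": "6"},
    {"ENG": "3", "MAT": "6"},
    {"ENG": "5", "MAT": "2"},
]

# Distribution of MAT grades across all scraped students.
mat_grades = Counter(s["MAT"] for s in students if "MAT" in s)
print(mat_grades)  # Counter({'6': 2, '2': 1})
```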

The full code is available in a gist here:

Any corrections and modifications are fully welcome!

If you want to learn how to do this, these resources could be helpful: Requests:
Python data structures:

Legal Disclaimer: The estate of Edison Abahurire is not responsible for any evil doings that individuals may derive out of this project. Stay safe!

I write myself out. Code Chef | Athlete | I Dance and Love travelling. I’m diving into Data Science.
