22 Joining 2: One-to-Many, Multi-Key Joins & Key Mismatches

22.1 Packages

import pandas as pd
import country_converter as cc

22.2 Data
Run the code below to load and define the datasets to be used in this lesson.
# Load datasets
oil_consumption = pd.read_csv(
    "https://raw.githubusercontent.com/the-graph-courses/idap_book/main/data/oil_consumption.csv"
)
tidyr_population = pd.read_csv(
    "https://raw.githubusercontent.com/the-graph-courses/idap_book/main/data/tidyr_population.csv"
)
country_regions = pd.read_csv(
    "https://raw.githubusercontent.com/the-graph-courses/idap_book/main/data/country_continent_data.csv"
)

oil_2012 = (
    oil_consumption[oil_consumption["year"] == 2012].copy().drop(columns=["year"])
)

# people data
people = pd.DataFrame({"name": ["Alice", "Bob", "Charlie"], "age": [25, 32, 45]})

test_info_many = pd.DataFrame(
    {
        "name": ["Alice", "Alice", "Bob", "Bob", "Charlie", "Charlie"],
        "test_date": [
            "2023-06-05",
            "2023-06-10",
            "2023-08-10",
            "2023-05-02",
            "2023-05-12",
            "2023-05-15",
        ],
        "result": [
            "Negative",
            "Positive",
            "Positive",
            "Negative",
            "Negative",
            "Negative",
        ],
    }
)

farm_info = pd.DataFrame(
    {
        "farm_id": [1, 2, 3],
        "farm_name": ["Green Acres", "Harvest Hill", "Golden Fields"],
        "location": ["County A", "County B", "County A"],
    }
)

crop_yields = pd.DataFrame(
    {
        "farm_id": [1, 1, 2, 3, 3],
        "crop": ["Wheat", "Corn", "Soybeans", "Wheat", "Barley"],
        "yield_tons": [50, 60, 45, 55, 30],
    }
)

traffic_flow = pd.DataFrame(
    {
        "street_name": [
            "Main St",
            "Main St",
            "Broadway",
            "Broadway",
            "Elm St",
            "Elm St",
        ],
        "time_of_day": ["9am", "2pm", "9am", "2pm", "9am", "2pm"],
        "vehicle_count": [1200, 900, 1500, 1100, 700, 600],
    }
)

pollution_levels = pd.DataFrame(
    {
        "street_name": [
            "Main St",
            "Main St",
            "Broadway",
            "Broadway",
            "Elm St",
            "Elm St",
        ],
        "time_of_day": ["9am", "2pm", "9am", "2pm", "9am", "2pm"],
        "pm_2_5_level": [35.5, 42.1, 40.3, 48.2, 25.7, 30.9],
    }
)

test_info_diff = pd.DataFrame(
    {
        "name": ["alice", "Bob", "Charlie "],
        "test_date": ["2023-06-05", "2023-08-10", "2023-05-02"],
        "result": ["Negative", "Positive", "Negative"],
    }
)

asia_countries = pd.DataFrame(
    {
        "Country": ["India", "Indonesia", "Philippines"],
        "Capital": ["New Delhi", "Jakarta", "Manila"],
    }
)

asia_population = pd.DataFrame(
    {
        "Country": ["India", "indonesia", "Philipines"],
        "Population": [1393000000, 273500000, 113000000],
        "Life_Expectancy": [69.7, 71.7, 72.7],
    }
)
22.3 Introduction
Now that we have a solid grasp on the different types of joins and how they work, we can look at how to manage more complex joins and messier data.
22.4 Learning Objectives
- You understand the concept of a one-to-many join
- You know how to join on multiple key columns
- You know how to check for mismatched values between dataframes
22.5 One-to-many joins
So far, we have primarily looked at one-to-one joins, where an observation in one dataframe corresponded to only one observation in the other dataframe. In a one-to-many join, an observation in one dataframe corresponds to multiple observations in the other dataframe.
To illustrate a one-to-many join, let’s return to our patients and their COVID test data. This time, imagine that each of our patients, Alice, Bob, and Charlie, got tested for COVID multiple times. The test_info_many dataframe below holds two test records per person:
people
  | name | age |
---|---|---|
0 | Alice | 25 |
1 | Bob | 32 |
2 | Charlie | 45 |
test_info_many
  | name | test_date | result |
---|---|---|---|
0 | Alice | 2023-06-05 | Negative |
1 | Alice | 2023-06-10 | Positive |
2 | Bob | 2023-08-10 | Positive |
3 | Bob | 2023-05-02 | Negative |
4 | Charlie | 2023-05-12 | Negative |
5 | Charlie | 2023-05-15 | Negative |
Next, let’s take a look at what happens when we use a merge() with people as the left dataframe:
="name", how="left") pd.merge(people, test_info_many, on
  | name | age | test_date | result |
---|---|---|---|---|
0 | Alice | 25 | 2023-06-05 | Negative |
1 | Alice | 25 | 2023-06-10 | Positive |
2 | Bob | 32 | 2023-08-10 | Positive |
3 | Bob | 32 | 2023-05-02 | Negative |
4 | Charlie | 45 | 2023-05-12 | Negative |
5 | Charlie | 45 | 2023-05-15 | Negative |
What’s happened above? Basically, when you perform a one-to-many join, the data from the “one” side are duplicated for each matching row of the “many” side.
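One quick way to confirm the duplication (a check of our own, not part of the original lesson) is to count how many times each name appears after the merge:

merged = pd.merge(people, test_info_many, on="name", how="left")
merged["name"].value_counts()  # each name appears once per test, so twice here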
22.6 Practice Q: Merging One-to-Many Crop Yields
Run the code below to print the two small dataframes:
farm_info
  | farm_id | farm_name | location |
---|---|---|---|
0 | 1 | Green Acres | County A |
1 | 2 | Harvest Hill | County B |
2 | 3 | Golden Fields | County A |
crop_yields
  | farm_id | crop | yield_tons |
---|---|---|---|
0 | 1 | Wheat | 50 |
1 | 1 | Corn | 60 |
2 | 2 | Soybeans | 45 |
3 | 3 | Wheat | 55 |
4 | 3 | Barley | 30 |
If you use a merge() to join these datasets, how many rows will be in the final dataframe? Try to figure it out and then perform the join to see if you were right.
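To verify your answer, one possible join (with farm_info as the left dataframe) looks like this; the one-to-many logic above tells us each farm row is repeated once per matching crop:

pd.merge(farm_info, crop_yields, on="farm_id", how="left")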
22.7 Multiple Key Columns
Sometimes we have more than one column that uniquely identifies the observations that we want to match on. For example, let’s imagine we have traffic flow data for three streets at two different times of day: 9am and 2pm.
traffic_flow
  | street_name | time_of_day | vehicle_count |
---|---|---|---|
0 | Main St | 9am | 1200 |
1 | Main St | 2pm | 900 |
2 | Broadway | 9am | 1500 |
3 | Broadway | 2pm | 1100 |
4 | Elm St | 9am | 700 |
5 | Elm St | 2pm | 600 |
Now, let’s imagine we have another dataset for the same three streets, recording air pollution levels (measured in particulate matter, PM2.5) during the same times of day.
pollution_levels
  | street_name | time_of_day | pm_2_5_level |
---|---|---|---|
0 | Main St | 9am | 35.5 |
1 | Main St | 2pm | 42.1 |
2 | Broadway | 9am | 40.3 |
3 | Broadway | 2pm | 48.2 |
4 | Elm St | 9am | 25.7 |
5 | Elm St | 2pm | 30.9 |
We want to join the two datasets so that each street has two rows: one for the 9am time point and one for the 2pm time point. To do this, our first instinct may be to join the datasets only on street_name. Let’s try it out and see what happens:
="street_name", how="left") pd.merge(traffic_flow, pollution_levels, on
  | street_name | time_of_day_x | vehicle_count | time_of_day_y | pm_2_5_level |
---|---|---|---|---|---|
0 | Main St | 9am | 1200 | 9am | 35.5 |
1 | Main St | 9am | 1200 | 2pm | 42.1 |
2 | Main St | 2pm | 900 | 9am | 35.5 |
3 | Main St | 2pm | 900 | 2pm | 42.1 |
4 | Broadway | 9am | 1500 | 9am | 40.3 |
5 | Broadway | 9am | 1500 | 2pm | 48.2 |
6 | Broadway | 2pm | 1100 | 9am | 40.3 |
7 | Broadway | 2pm | 1100 | 2pm | 48.2 |
8 | Elm St | 9am | 700 | 9am | 25.7 |
9 | Elm St | 9am | 700 | 2pm | 30.9 |
10 | Elm St | 2pm | 600 | 9am | 25.7 |
11 | Elm St | 2pm | 600 | 2pm | 30.9 |
As we can see, this isn’t what we wanted at all! We end up with duplicated rows: each street’s two rows in traffic_flow matched both of that street’s rows in pollution_levels, giving 2 × 2 = 4 rows per street.
What we want to do is match on BOTH street_name AND time_of_day. To do this, we need to tell Python to match on two columns by specifying both column names in a list.
=["street_name", "time_of_day"]) pd.merge(traffic_flow, pollution_levels, on
  | street_name | time_of_day | vehicle_count | pm_2_5_level |
---|---|---|---|---|
0 | Main St | 9am | 1200 | 35.5 |
1 | Main St | 2pm | 900 | 42.1 |
2 | Broadway | 9am | 1500 | 40.3 |
3 | Broadway | 2pm | 1100 | 48.2 |
4 | Elm St | 9am | 700 | 25.7 |
5 | Elm St | 2pm | 600 | 30.9 |
Now we have the correct number of rows! We can directly see the vehicle count and PM2.5 level for each street at each time of day.
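As a side note (our addition, not from the original lesson), merge() also accepts a validate argument that makes pandas check the join is of the kind you expect. For example, validate="one_to_one" raises a MergeError if any street/time combination repeats on either side:

pd.merge(
    traffic_flow,
    pollution_levels,
    on=["street_name", "time_of_day"],
    validate="one_to_one",  # error out if keys repeat on either side
)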
22.8 Practice Q: Calculate Oil Consumption per Capita
We have two datasets containing information about countries:

- oil_consumption: Contains yearly oil consumption in tonnes
- tidyr_population: Contains yearly population data
# View the datasets
oil_consumption.sort_values(by=["country", "year"])
  | country | year | oil_consump |
---|---|---|---|
19 | Algeria | 1995 | 8430000 |
98 | Algeria | 1996 | 8060000 |
177 | Algeria | 1997 | 7990000 |
256 | Algeria | 1998 | 8220000 |
335 | Algeria | 1999 | 8110000 |
... | ... | ... | ... |
1183 | Vietnam | 2009 | 14200000 |
1262 | Vietnam | 2010 | 15300000 |
1341 | Vietnam | 2011 | 16700000 |
1420 | Vietnam | 2012 | 17000000 |
1499 | Vietnam | 2013 | 18200000 |
1501 rows × 3 columns
=["country", "year"]) tidyr_population.sort_values(by
  | country | year | population |
---|---|---|---|
0 | Afghanistan | 1995 | 17586073 |
1 | Afghanistan | 1996 | 18415307 |
2 | Afghanistan | 1997 | 19021226 |
3 | Afghanistan | 1998 | 19496836 |
4 | Afghanistan | 1999 | 19987071 |
... | ... | ... | ... |
4040 | Zimbabwe | 2009 | 12888918 |
4041 | Zimbabwe | 2010 | 13076978 |
4042 | Zimbabwe | 2011 | 13358738 |
4043 | Zimbabwe | 2012 | 13724317 |
4044 | Zimbabwe | 2013 | 14149648 |
4045 rows × 3 columns
1. Join these datasets using merge() with a left join. Since we want to match both country AND year, you’ll need to join on multiple columns. (You may notice that not all rows are matched. You can ignore this for now.)
2. After joining, create a new column called consumption_per_capita that calculates the yearly oil consumption per person (in tonnes).
3. Which country had the highest per capita oil consumption in 1995?
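If you want to check your work afterwards, here is one possible sketch (the intermediate name oil_pop is our own):

oil_pop = pd.merge(
    oil_consumption, tidyr_population, on=["country", "year"], how="left"
)
oil_pop["consumption_per_capita"] = oil_pop["oil_consump"] / oil_pop["population"]

# Sort 1995 by per capita consumption to answer the last question
oil_pop.query("year == 1995").sort_values(
    "consumption_per_capita", ascending=False
).head()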
22.9 Key Mismatches
Often you will need to pre-clean your data when you draw it from different sources before you’re able to join it. This is because there can be inconsistencies in the way values are recorded.

To illustrate this, let’s return to our mock patient data from the first lesson. If you recall, we had two dataframes, one called people and the other called test_info. We can recreate these datasets but change Alice to alice in the test_info_diff dataset and keep all other values the same.
people
  | name | age |
---|---|---|
0 | Alice | 25 |
1 | Bob | 32 |
2 | Charlie | 45 |
test_info_diff
  | name | test_date | result |
---|---|---|---|
0 | alice | 2023-06-05 | Negative |
1 | Bob | 2023-08-10 | Positive |
2 | Charlie | 2023-05-02 | Negative |
Now let’s try a merge() on our two datasets.
people.merge(test_info_diff, on='name', how='left')
  | name | age | test_date | result |
---|---|---|---|---|
0 | Alice | 25 | NaN | NaN |
1 | Bob | 32 | 2023-08-10 | Positive |
2 | Charlie | 45 | NaN | NaN |
="name", how="inner") pd.merge(people, test_info_diff, on
  | name | age | test_date | result |
---|---|---|---|---|
0 | Bob | 32 | 2023-08-10 | Positive |
As we can see, Python didn’t recognize Alice and alice as the same person, and it also failed to match the two Charlie entries! So we lose Alice and Charlie in the left join, and they are dropped in the inner join.
How can we fix this? We need to ensure that the names in both datasets are in the same format. For this, we can use str.title() to capitalize the first letter of each name.
test_info_diff['name'] = test_info_diff['name'].str.title()
test_info_diff
  | name | test_date | result |
---|---|---|---|
0 | Alice | 2023-06-05 | Negative |
1 | Bob | 2023-08-10 | Positive |
2 | Charlie | 2023-05-02 | Negative |
people.merge(test_info_diff, on='name', how='inner')
  | name | age | test_date | result |
---|---|---|---|---|
0 | Alice | 25 | 2023-06-05 | Negative |
1 | Bob | 32 | 2023-08-10 | Positive |
Hmm, Charlie is still not matched. It’s hard to see from the printout, but the string Charlie in test_info_diff has an extra space at the end.

We can spot this better by using .unique() to view the column as an array:
test_info_diff['name'].unique()
array(['Alice', 'Bob', 'Charlie '], dtype=object)
We can fix this by using str.strip() to remove the extra space.
test_info_diff['name'] = test_info_diff['name'].str.strip()
test_info_diff
  | name | test_date | result |
---|---|---|---|
0 | Alice | 2023-06-05 | Negative |
1 | Bob | 2023-08-10 | Positive |
2 | Charlie | 2023-05-02 | Negative |
Now we can join the two datasets:
people.merge(test_info_diff, on='name', how='inner')
  | name | age | test_date | result |
---|---|---|---|---|
0 | Alice | 25 | 2023-06-05 | Negative |
1 | Bob | 32 | 2023-08-10 | Positive |
2 | Charlie | 45 | 2023-05-02 | Negative |
Perfect!
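One more diagnostic worth knowing (our addition): merge() accepts an indicator argument that adds a _merge column labelling each row as both, left_only, or right_only. Run before cleaning, it would have flagged the alice and Charlie mismatches immediately:

# the _merge column shows which side(s) each row came from
pd.merge(people, test_info_diff, on="name", how="outer", indicator=True)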
22.10 Practice Q: Inner Join Countries
The following two datasets contain data for India, Indonesia, and the Philippines. However, an inner join of these datasets only returns 1 row.
asia_countries
  | Country | Capital |
---|---|---|
0 | India | New Delhi |
1 | Indonesia | Jakarta |
2 | Philippines | Manila |
asia_population
  | Country | Population | Life_Expectancy |
---|---|---|---|
0 | India | 1393000000 | 69.7 |
1 | indonesia | 273500000 | 71.7 |
2 | Philipines | 113000000 | 72.7 |
pd.merge(asia_countries, asia_population)
  | Country | Capital | Population | Life_Expectancy |
---|---|---|---|---|
0 | India | New Delhi | 1393000000 | 69.7 |
What are the differences between the values in the key columns that would have to be changed before joining the datasets? Pay attention to capitalization and spelling.

Now, fix the mismatched values in the Country column and try the join again.
22.11 Key Mismatches: Oil Consumption Example
Let’s now see a more realistic example of how mismatched keys can cause problems.
oil_consumption
  | country | year | oil_consump |
---|---|---|---|
0 | United Arab Emirates | 1995 | 20800000 |
1 | Argentina | 1995 | 20300000 |
2 | Australia | 1995 | 36500000 |
3 | Austria | 1995 | 11300000 |
4 | Azerbaijan | 1995 | 6580000 |
... | ... | ... | ... |
1496 | USA | 2013 | 791000000 |
1497 | Uzbekistan | 2013 | 2860000 |
1498 | Venezuela | 2013 | 36800000 |
1499 | Vietnam | 2013 | 18200000 |
1500 | South Africa | 2013 | 26700000 |
1501 rows × 3 columns
tidyr_population
  | country | year | population |
---|---|---|---|
0 | Afghanistan | 1995 | 17586073 |
1 | Afghanistan | 1996 | 18415307 |
2 | Afghanistan | 1997 | 19021226 |
3 | Afghanistan | 1998 | 19496836 |
4 | Afghanistan | 1999 | 19987071 |
... | ... | ... | ... |
4040 | Zimbabwe | 2009 | 12888918 |
4041 | Zimbabwe | 2010 | 13076978 |
4042 | Zimbabwe | 2011 | 13358738 |
4043 | Zimbabwe | 2012 | 13724317 |
4044 | Zimbabwe | 2013 | 14149648 |
4045 rows × 3 columns
After we attempt a join, we see that there are some countries that are not matched, such as Vietnam.
pd.merge(
    oil_consumption, tidyr_population, on=["country", "year"], how="left"
).sort_values(["country", "year"])
  | country | year | oil_consump | population |
---|---|---|---|---|
19 | Algeria | 1995 | 8430000 | 29315463.0 |
98 | Algeria | 1996 | 8060000 | 29845208.0 |
177 | Algeria | 1997 | 7990000 | 30345466.0 |
256 | Algeria | 1998 | 8220000 | 30820435.0 |
335 | Algeria | 1999 | 8110000 | 31276295.0 |
... | ... | ... | ... | ... |
1183 | Vietnam | 2009 | 14200000 | NaN |
1262 | Vietnam | 2010 | 15300000 | NaN |
1341 | Vietnam | 2011 | 16700000 | NaN |
1420 | Vietnam | 2012 | 17000000 | NaN |
1499 | Vietnam | 2013 | 18200000 | NaN |
1501 rows × 4 columns
This is because the country names are not in the same format in the two datasets.
Before attempting to join these datasets, it’s a good idea to check for mismatches in the key columns. This can help you identify any discrepancies that might prevent a successful join.
First, let’s identify the unique country names in both datasets.
oil_countries = set(oil_consumption['country'].unique())
pop_countries = set(tidyr_population['country'].unique())
Now, to find countries in oil_consumption that are not in tidyr_population, we can use set arithmetic:
missing_in_pop = oil_countries - pop_countries
missing_in_pop
{'Hong Kong, China',
'Iran',
'North Macedonia',
'Russia',
'Slovak Republic',
'South Korea',
'Taiwan',
'UK',
'USA',
'Venezuela',
'Vietnam'}
And countries in tidyr_population that are not in oil_consumption:
missing_in_oil = pop_countries - oil_countries
missing_in_oil
{'Afghanistan',
'Albania',
'American Samoa',
'Andorra',
'Angola',
'Anguilla',
'Antigua and Barbuda',
'Armenia',
'Aruba',
'Bahamas',
'Bahrain',
'Barbados',
'Belize',
'Benin',
'Bermuda',
'Bhutan',
'Bolivia (Plurinational State of)',
'Bonaire, Saint Eustatius and Saba',
'Bosnia and Herzegovina',
'Botswana',
'British Virgin Islands',
'Brunei Darussalam',
'Burkina Faso',
'Burundi',
'Cabo Verde',
'Cambodia',
'Cameroon',
'Cayman Islands',
'Central African Republic',
'Chad',
'China, Hong Kong SAR',
'China, Macao SAR',
'Comoros',
'Congo',
'Cook Islands',
'Costa Rica',
'Cuba',
'Curaçao',
"Côte d'Ivoire",
"Democratic People's Republic of Korea",
'Democratic Republic of the Congo',
'Djibouti',
'Dominica',
'Dominican Republic',
'El Salvador',
'Equatorial Guinea',
'Eritrea',
'Ethiopia',
'Fiji',
'French Polynesia',
'Gabon',
'Gambia',
'Georgia',
'Ghana',
'Greenland',
'Grenada',
'Guam',
'Guatemala',
'Guinea',
'Guinea-Bissau',
'Guyana',
'Haiti',
'Honduras',
'Iran (Islamic Republic of)',
'Jamaica',
'Jordan',
'Kenya',
'Kiribati',
'Kyrgyzstan',
"Lao People's Democratic Republic",
'Lebanon',
'Lesotho',
'Liberia',
'Libya',
'Madagascar',
'Malawi',
'Maldives',
'Mali',
'Malta',
'Marshall Islands',
'Mauritania',
'Mauritius',
'Micronesia (Federated States of)',
'Monaco',
'Mongolia',
'Montenegro',
'Montserrat',
'Mozambique',
'Myanmar',
'Namibia',
'Nauru',
'Nepal',
'New Caledonia',
'Nicaragua',
'Niger',
'Nigeria',
'Niue',
'Northern Mariana Islands',
'Palau',
'Panama',
'Papua New Guinea',
'Paraguay',
'Puerto Rico',
'Republic of Korea',
'Republic of Moldova',
'Russian Federation',
'Rwanda',
'Saint Kitts and Nevis',
'Saint Lucia',
'Saint Vincent and the Grenadines',
'Samoa',
'San Marino',
'Sao Tome and Principe',
'Senegal',
'Serbia',
'Seychelles',
'Sierra Leone',
'Sint Maarten (Dutch part)',
'Slovakia',
'Solomon Islands',
'Somalia',
'South Sudan',
'Sudan',
'Suriname',
'Swaziland',
'Syrian Arab Republic',
'Tajikistan',
'The Former Yugoslav Republic of Macedonia',
'Timor-Leste',
'Togo',
'Tokelau',
'Tonga',
'Tunisia',
'Turks and Caicos Islands',
'Tuvalu',
'US Virgin Islands',
'Uganda',
'United Kingdom of Great Britain and Northern Ireland',
'United Republic of Tanzania',
'United States of America',
'Uruguay',
'Vanuatu',
'Venezuela (Bolivarian Republic of)',
'Viet Nam',
'Wallis and Futuna Islands',
'West Bank and Gaza Strip',
'Yemen',
'Zambia',
'Zimbabwe'}
These differences indicate mismatches in the key columns that need to be addressed before joining.
You might try to check these manually. For example, we can see that Vietnam is written as Vietnam in one dataset and Viet Nam in the other.
However, in the case of countries, there is an even nicer solution: use country codes! We’ll see how to do this in the next section.
22.12 Set Arithmetic
A quick side note on set arithmetic for those who are unfamiliar.
Consider two sets containing the numbers 1 through 5 and 2 through 4.
set_1 = set([1, 2, 3, 4, 5])
set_2 = set([2, 3, 4])
We can check the values in set_1 that are not in set_2 by using set arithmetic:
set_1 - set_2
{1, 5}
And the values in set_2 that are not in set_1 by using:
set_2 - set_1
set()
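As an aside (our addition), sets also support a symmetric difference, the values that appear in exactly one of the two sets, which shows all mismatches at once:

set_1 ^ set_2  # values in either set but not both: {1, 5}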
22.12.1 Merging with Country Codes
To avoid country mismatches, it is often useful to use country codes rather than country names as the key.
Let’s now add country codes to both datasets and try the join again.
# How to use country_converter
cc.convert("Nigeria", to='ISO3')
'NGA'
oil_consumption['country_code'] = cc.convert(oil_consumption['country'], to='ISO3')
tidyr_population['country_code'] = cc.convert(tidyr_population['country'], to='ISO3')
oil_pop_code = oil_consumption.merge(tidyr_population, on=['country_code', 'year'], how='left')
22.12.2 Identifying Remaining Mismatches
Let’s see which countries still failed to find a match:
set(oil_pop_code['country_code'].unique()) - set(tidyr_population['country_code'].unique())
{'TWN'}
It seems ‘TWN’ (Taiwan) failed to find a match. We can manually look through the tidyr_population dataset to see if we can find it.
"country.str.contains('Taiwan')") tidyr_population.query(
country | year | population | country_code |
---|---|---|---|
Just in case there is a mismatch in capitalization, we can also check for ‘taiwan’:
"country.str.contains('taiwan')") tidyr_population.query(
country | year | population | country_code |
---|---|---|---|
And we can check for ‘China’, since Taiwan’s data is sometimes recorded as part of China.
"country.str.contains('China')") tidyr_population.query(
  | country | year | population | country_code |
---|---|---|---|---|
783 | China | 1995 | 1237531429 | CHN |
784 | China | 1996 | 1247897092 | CHN |
785 | China | 1997 | 1257021784 | CHN |
786 | China | 1998 | 1265222536 | CHN |
787 | China | 1999 | 1272915272 | CHN |
788 | China | 2000 | 1280428583 | CHN |
789 | China | 2001 | 1287890449 | CHN |
790 | China | 2002 | 1295322020 | CHN |
791 | China | 2003 | 1302810258 | CHN |
792 | China | 2004 | 1310414386 | CHN |
793 | China | 2005 | 1318176835 | CHN |
794 | China | 2006 | 1326146433 | CHN |
795 | China | 2007 | 1334343509 | CHN |
796 | China | 2008 | 1342732604 | CHN |
797 | China | 2009 | 1351247555 | CHN |
798 | China | 2010 | 1359821465 | CHN |
799 | China | 2011 | 1368440300 | CHN |
800 | China | 2012 | 1377064907 | CHN |
801 | China | 2013 | 1385566537 | CHN |
802 | China, Hong Kong SAR | 1995 | 6144498 | HKG |
803 | China, Hong Kong SAR | 1996 | 6275363 | HKG |
804 | China, Hong Kong SAR | 1997 | 6430651 | HKG |
805 | China, Hong Kong SAR | 1998 | 6591717 | HKG |
806 | China, Hong Kong SAR | 1999 | 6732627 | HKG |
807 | China, Hong Kong SAR | 2000 | 6835301 | HKG |
808 | China, Hong Kong SAR | 2001 | 6892752 | HKG |
809 | China, Hong Kong SAR | 2002 | 6912079 | HKG |
810 | China, Hong Kong SAR | 2003 | 6906631 | HKG |
811 | China, Hong Kong SAR | 2004 | 6896523 | HKG |
812 | China, Hong Kong SAR | 2005 | 6896686 | HKG |
813 | China, Hong Kong SAR | 2006 | 6910671 | HKG |
814 | China, Hong Kong SAR | 2007 | 6934748 | HKG |
815 | China, Hong Kong SAR | 2008 | 6967866 | HKG |
816 | China, Hong Kong SAR | 2009 | 7006930 | HKG |
817 | China, Hong Kong SAR | 2010 | 7049514 | HKG |
818 | China, Hong Kong SAR | 2011 | 7096359 | HKG |
819 | China, Hong Kong SAR | 2012 | 7148493 | HKG |
820 | China, Hong Kong SAR | 2013 | 7203836 | HKG |
821 | China, Macao SAR | 1995 | 398459 | MAC |
822 | China, Macao SAR | 1996 | 405231 | MAC |
823 | China, Macao SAR | 1997 | 412031 | MAC |
824 | China, Macao SAR | 1998 | 418810 | MAC |
825 | China, Macao SAR | 1999 | 425448 | MAC |
826 | China, Macao SAR | 2000 | 431907 | MAC |
827 | China, Macao SAR | 2001 | 438080 | MAC |
828 | China, Macao SAR | 2002 | 444150 | MAC |
829 | China, Macao SAR | 2003 | 450711 | MAC |
830 | China, Macao SAR | 2004 | 458542 | MAC |
831 | China, Macao SAR | 2005 | 468149 | MAC |
832 | China, Macao SAR | 2006 | 479808 | MAC |
833 | China, Macao SAR | 2007 | 493206 | MAC |
834 | China, Macao SAR | 2008 | 507528 | MAC |
835 | China, Macao SAR | 2009 | 521617 | MAC |
836 | China, Macao SAR | 2010 | 534626 | MAC |
837 | China, Macao SAR | 2011 | 546278 | MAC |
838 | China, Macao SAR | 2012 | 556783 | MAC |
839 | China, Macao SAR | 2013 | 566375 | MAC |
It seems that Taiwan is not in the tidyr_population dataset.

In such a case, you might then try to find a dataset containing population data for Taiwan and add it to the tidyr_population dataset. But we’ll leave this for you to figure out.
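If you did track down the figures, the mechanics might look like the sketch below; the row and the missing population value are placeholders, not real data:

# Hypothetical: append Taiwan rows once you have trusted population figures
taiwan_pop = pd.DataFrame(
    {
        "country": ["Taiwan"],
        "year": [2013],
        "population": [pd.NA],  # placeholder; fill in from a real source
        "country_code": ["TWN"],
    }
)
tidyr_population = pd.concat([tidyr_population, taiwan_pop], ignore_index=True)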
22.13 Practice Q: Merging Oil Consumption with Geographic Data
Run the code to view the two datasets.

The first, oil_2012, records the oil consumption for the year 2012:
oil_2012
  | country | oil_consump |
---|---|---|
1343 | United Arab Emirates | 35200000 |
1344 | Argentina | 28600000 |
1345 | Australia | 46100000 |
1346 | Austria | 11900000 |
1347 | Azerbaijan | 4170000 |
... | ... | ... |
1417 | USA | 778000000 |
1418 | Uzbekistan | 3030000 |
1419 | Venezuela | 37200000 |
1420 | Vietnam | 17000000 |
1421 | South Africa | 26300000 |
79 rows × 2 columns
And country_regions lists countries along with their respective regions and continents:
country_regions
  | country_name | country_code | continent |
---|---|---|---|
0 | Afghanistan | AFG | Asia |
1 | Albania | ALB | Europe |
2 | Algeria | DZA | Africa |
3 | American Samoa | ASM | Oceania |
4 | Andorra | AND | Europe |
... | ... | ... | ... |
237 | Western Sahara | ESH | Africa |
238 | Yemen | YEM | Asia |
239 | Zambia | ZMB | Africa |
240 | Zimbabwe | ZWE | Africa |
241 | Åland Islands | ALA | Europe |
242 rows × 3 columns
Join the two datasets using the country codes as the key. Then find the countries with the highest oil consumption in each continent. As a sanity check, your answer should include the US & China.
oil_2012['country_code'] = cc.convert(oil_2012['country'], to='ISO3')

oil_2012_regions = oil_2012.merge(country_regions, on='country_code', how='left')

max_oil_by_continent = oil_2012_regions.loc[
    oil_2012_regions.groupby('continent')['oil_consump'].idxmax()
]

max_oil_by_continent[['country', 'continent', 'oil_consump']]
  | country | continent | oil_consump |
---|---|---|---|
21 | Egypt | Africa | 35300000 |
74 | USA | Americas | 778000000 |
13 | China | Asia | 484000000 |
62 | Russia | Europe | 145000000 |
2 | Australia | Oceania | 46100000 |
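As a design note, groupby('continent')['oil_consump'].idxmax() returns the index of the highest-consumption row per continent, which .loc[] then pulls out. An equivalent approach (one of several) is to sort and then drop duplicate continents:

# Sort so each continent's biggest consumer comes first, then keep one row per continent
(
    oil_2012_regions.sort_values("oil_consump", ascending=False)
    .drop_duplicates(subset="continent")[["country", "continent", "oil_consump"]]
)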
22.14 Wrap Up!
In this lesson, we explored several advanced concepts related to joining dataframes in Python:
- One-to-Many Relationships: We learned how joins work when one observation in a dataframe corresponds to multiple observations in another dataframe, and how the data from the “one” side gets duplicated for each matching row on the “many” side.
- Multi-Key Joins: We discovered how to join dataframes using multiple columns as keys (like combining street names with time of day), which is essential for maintaining data integrity when a single column isn’t enough to uniquely identify observations.
- Key Mismatches: We explored common challenges in joining data from different sources, including:
  - Case sensitivity issues (e.g., “Alice” vs “alice”)
  - Trailing spaces
  - Spelling variations
  - Using standardized codes (like country codes) to ensure consistent matching
These skills are crucial for real-world data analysis, where data often comes from multiple sources and requires careful cleaning and preparation before it can be effectively combined.