Django Bulk Save - 2 Fast 2 Big
Continuing from Django Bulk Save, I wanted to test how the copy_from function would perform when I scale to over a million records.
copy_from scales incredibly well: it took 4 seconds to write 1.6 million rows.
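For reference, the write itself is roughly the sketch below: stream the CSV into Postgres through Django's connection. This assumes psycopg2 as the database driver (its cursor exposes copy_from), and the table and column names (app_transaction, amount, description, created_at) are placeholders, not the actual model from the previous post.

```python
from django.db import connection

def copy_csv_to_db(csv_path):
    """Stream a large CSV into Postgres with copy_from.

    Assumes psycopg2; table and column names are placeholders,
    swap in your own model's table.
    """
    with open(csv_path, "r") as f:
        next(f)  # skip the header row
        with connection.cursor() as cursor:
            cursor.copy_from(
                f,
                "app_transaction",  # placeholder table name
                sep=",",
                columns=("amount", "description", "created_at"),
            )
```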
The rest of this article strays from the actual topic.
I decided to play around with different CSV libraries to figure out which one is most efficient when you have a file this large. You'd probably never encounter a scenario where you'd have to parse a million rows, transform some of them, and write them to the database, but it felt like a problem worth solving.
I set up two custom APIs: one using Python's native csv library and the other using pandas.
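The csv-based endpoint boiled down to something like this sketch: parse the upload with the stdlib csv module, apply a transform to each row, and rebuild an in-memory CSV that copy_from can consume. The column names and the uppercase transform are stand-ins, not the exact code.

```python
import csv
import io

def transform_with_stdlib_csv(csv_path):
    """Parse the CSV row by row, tweak each row, and rebuild an
    in-memory CSV ready to hand to copy_from.

    Column names and the transform are hypothetical examples.
    """
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # example transform: normalise the description column
            writer.writerow(
                [row["amount"], row["description"].upper(), row["created_at"]]
            )
    buffer.seek(0)
    return buffer
```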
The pandas API was pretty slow compared to the native csv library, but I'm probably going about this the wrong way. copy_from accepts a CSV file anyway, so I should have used pandas to modify whatever I needed in the CSV itself and passed it straight to Postgres, instead of converting it to dicts and then recreating the CSV. If you're dealing with something like this, it's probably best to dump the data to the database as-is and run an asynchronous job to process it and save it to a different table.
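In other words, something like the sketch below: let pandas do the transform on the DataFrame, serialise it back to an in-memory CSV, and hand that buffer directly to copy_from. Again, psycopg2 is assumed, and the table, columns, and transform are placeholders.

```python
import io

import pandas as pd
from django.db import connection

def copy_with_pandas(csv_path):
    """Transform the CSV with pandas and pass the result straight to
    copy_from, skipping the DataFrame -> dict -> CSV round trip.
    """
    df = pd.read_csv(csv_path)
    df["description"] = df["description"].str.upper()  # placeholder transform

    buffer = io.StringIO()
    df.to_csv(buffer, index=False, header=False)
    buffer.seek(0)

    with connection.cursor() as cursor:
        cursor.copy_from(
            buffer,
            "app_transaction",  # placeholder table name
            sep=",",
            columns=("amount", "description", "created_at"),
        )
```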
If you're also messing around with CSVs and pandas, you should definitely check out Dask. It performs incredibly well, especially when you're low on memory, and it's much faster than pandas at manipulating large datasets because it parallelizes the work.
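A minimal taste of what that looks like, assuming a hypothetical big_file.csv with a description column: Dask reads the file in partitions and runs the transform across them in parallel instead of loading everything into memory at once.

```python
import dask.dataframe as dd

# Dask splits the CSV into partitions and processes them in parallel,
# so memory usage stays bounded even for very large files.
df = dd.read_csv("big_file.csv")               # hypothetical file
df["description"] = df["description"].str.upper()
df.to_csv("processed-*.csv", index=False)      # one output CSV per partition
```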