The Poor Man escapes dependency hell.

Yash Bhambhu
Jun 28, 2021 · 5 min read

This blog is written to document my GSoC journey on the project PoorMan’s Rekognition.

The stage is set, and now it's time to embellish the poor man. During the community bonding period, I quickly revisited the deliverables promised in my proposal. I had a brief discussion with my mentor Pulkit Mishra about what the boilerplate state of the project should be when the program commenced.

Our discussion boiled down to two major chunks:

  1. Fixing the CI pipeline: The project uses Travis CI for testing. I had a solid base already laid out for me thanks to the previous year’s work, but there hadn’t been a successful build on Travis in ages. The reasons were a missing Django settings initialization and an improperly configured YAML file. After fixing those, I translated all the image tests to their video counterparts. Pulkit suggested it would be convenient if we could test changes locally before committing, so I took this valuable piece of advice and added a test.sh file for offline testing. It would have saved me a few hours of frustration while I was fixing the CI pipeline: Travis’s overhead may only be a few minutes, but that can be a bummer when you commit in short, quick bursts. The part that took the biggest toll on my head was figuring out that the missing Django settings initialization was sabotaging the tests, because it produced a highly misleading error (a sketch of the fix follows this list). Speaking of reporting accurate errors, our project wasn’t winning any medals on that turf either, hence the next point.
  2. Fixing error messages: Much like GoT’s Hodor, the APIs were very limited in their vocabulary when it came to reporting errors. All they could log was “Facial Expression Recognition Not Working,” but I have replaced them with appropriate messages now (see the second sketch below), so happy debugging :)
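For reference, the Django fix amounts to initializing settings before anything imports the models. A minimal sketch, assuming an illustrative settings module name (the project’s actual module path may differ):

```python
# Run this before any Django model imports, e.g. at the top of the test entry point.
import os
import django

# "project.settings" is a placeholder; the real settings module may differ.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
django.setup()  # without this, imports fail with misleading, unrelated errors
```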
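And the error-message cleanup was in this spirit; a hypothetical sketch (run_fer_model and the messages are illustrative, not the exact code):

```python
import logging

logger = logging.getLogger(__name__)

def recognize_expression(image):
    """Return facial-expression predictions, logging specific failures."""
    try:
        return run_fer_model(image)  # hypothetical model call
    except FileNotFoundError as err:
        # Point at the actual cause instead of a generic "not working" line.
        logger.error("FER model weights missing: %s", err)
        raise
    except ValueError as err:
        logger.error("Invalid input image for FER: %s", err)
        raise
```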

Before the program started, I was determined to follow the chronology laid out in my original proposal, but that determination went south on the very first day.

The first two weeks of Google Summer of Code were allocated to making the project at peace with all versions of Python (3.6+) and upgrading it to TensorFlow 2.0, replacing its archaic counterpart. The first week covered three major updates:

  1. Adding Python 3.7 and 3.8 support
  2. Adding a pip package for a special dependency, LANMS
  3. Resolving dependencies and updating requirements

First of all, I would like to acknowledge that yes, Python 3.4–10 updates are weird and were implemented under the devil’s influence. It is acceptable and expected that things break when there is a major release, like the update from Python 2 to Python 3. Still, many libraries would outright refuse to serve for seemingly random reasons if you change the subversion within Python 3.4+. So there were many version conflicts; some libraries would work on Python 3.8 but not on 3.6, and vice versa. The reader might think the obvious step is to not mention the version while installing a library and let pip decide what is suitable. That is not only a terrible practice but also sometimes impractical: functions and methods get updated across library releases, and this is widespread in updates falling in this Python range. Luckily, we dodged most of those concerns, as we could pin down almost all of the dependencies by binary-searching the versions back and forth, except two: LANMS and NumPy.

NumPy’s immediate versions refused to agree on a treaty between Python 3.8 and 3.6, so I left it at the mercy of pip’s version handler: its syntax hadn’t changed, and the amount of code dependent on that well-established API makes changing it in the future unwise, so hopefully we are covered on that front. Now comes a funnier problem. Imagine a package that would install on all versions but would work on only one: a textbook example of poor Python packaging. One must always specify the Python requirement for a release version if one doesn’t intend to create a catastrophe, and LANMS illustrates this perfectly. It was intended to serve as a helper for a decent implementation of EAST (to be discussed in future posts), and it served the original project quite well. What makes this package unique is that it is a compiled Cython binary, implemented such that the same code has to be compiled separately for each specific subversion of Python. The compiled version was hosted on PyPI without restricting it to the subversion it was compiled for, so it would install on any Python 3 but throw an error on import. I compiled it for all the subversions and published a PyPI package specifying the Python requirement for each (a sketch of the idea follows), and the problem was solved. But the mess didn’t end there. This is the part that reveals why I had to intermingle tasks from the first two weeks.
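For the curious, the packaging fix boils down to one line of metadata. A minimal sketch of such a setup.py for one compiled binary, with illustrative names and versions (not the actual release files):

```python
# setup.py for the build compiled against Python 3.6 -- names are illustrative.
from setuptools import setup

setup(
    name="lanms-example",                  # placeholder package name
    version="1.0.0",
    packages=["lanms"],
    package_data={"lanms": ["*.so"]},      # ship the compiled Cython binary
    # The crucial line: refuse to install on any subversion other than
    # the one this binary was compiled for.
    python_requires=">=3.6,<3.7",
)
```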
In the second week, I proposed two major tasks:

  1. Upgrading code to TF2.x
  2. Setting up a dev environment for macOS

The latter was basically replacing venv creation and Docker installation with appropriate macOS alternatives; nothing needed to change under the hood. The former had to be addressed prematurely, the reason being that TF1 is not supported on Python 3.8. To make the project compatible with Python 3.8 I had to test it, and to be tested, the project needs its backbone, aka TensorFlow, working firmly, which it wasn’t; so I had to upgrade to TensorFlow 2.0 early. TensorFlow 2 has a module, tf.compat.v1, which includes the whole TensorFlow 1 API. I used it to replace the existing method calls, and it worked seamlessly.
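The pattern, in a minimal sketch (the tiny graph here is illustrative, not project code):

```python
# TF1-style code kept alive on TensorFlow 2 via the compatibility module.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restore TF1 graph-and-session semantics

# An illustrative graph: the project's real calls were swapped the same way.
x = tf.placeholder(tf.float32, shape=[None, 3])
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```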

PoorMan’s Rekognition is itself a stunning and ambitious project. I have learned quite a bit contributing here, and I have the highest hopes that the coming weeks will keep challenging me and teaching me new stuff. Since you have made it to the end, here is a meme for you (so you keep coming back for the next blog posts… congrats, now you know this is just reinforcement learning :P).
