USPSOIG.gov Rebuild

Posted: November 30, 2015 12:22:55 • By Natasha L. • 834 words
  • URL: uspsoig.gov (Launch Announcement)
  • Started: February 2015
  • Deployed: October 1, 2015
  • Role: Full-stack developer
  • Team Size: 5
  • Tech: Drupal 7.x, Python, PHP, Bootstrap, LESS

Screenshots

  • Homepage (Top)
  • Homepage (Bottom)
  • Document Library
  • Homepage (Mobile)

Even though this project didn't primarily involve Django or Python, it was one of my favorite projects at the USPS OIG, and probably the only time I've genuinely enjoyed working with Drupal.

Goals

  • Give the USPS OIG public website a fresh new theme/look.
  • Make the site work properly on mobile devices and small screens.
  • Re-organize content/structure to make the site easier to navigate.
  • Make the Document Library more user-friendly, with granular filtering options.
  • Rebuild the Online Complaint Form to make it easier to use, and to better tailor it to the typical information provided by the general public.
  • Improve the Blog and Audit Asks (announcements of, and public feedback on, the Office of Audit's projects and reports) sections to better distinguish them from each other, and to make each more effective at its respective goal.

Work Performed

My work on this project ran the full gamut: CSS/LESS and JavaScript coding, accessibility testing and improvements, design consultation, back-end architecture, custom PHP coding, and infrastructure engineering. One of my primary tasks was to develop a way to transfer content from the old version of the site to the new one on a regular basis while we were developing, so that the dev copy of the site had fresh content and could more easily be pushed to production when the time came. My teammate and I accomplished this using the Feeds module, a few related Drupal modules, and a custom view on the old site that output a customized XML schema (standard RSS couldn't handle the large number of custom fields, unfortunately).
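
For a sense of what the consuming side of that pipeline looked like, here's a minimal sketch in Python; the feed URL and field names are hypothetical stand-ins for the real schema, which carried far more custom fields than standard RSS could express:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical URL for the custom XML view on the old site.
    FEED_URL = "https://old.uspsoig.gov/export/content.xml"

    def fetch_items(url=FEED_URL):
        """Yield one dict per content item from the custom XML export."""
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for item in tree.getroot().iter("item"):
            # Custom fields like these are what standard RSS couldn't carry.
            yield {
                "nid": item.findtext("nid"),
                "title": item.findtext("title"),
                "body": item.findtext("body"),
                "report_number": item.findtext("report_number"),
                "document_type": item.findtext("document_type"),
            }

    if __name__ == "__main__":
        for entry in fetch_items():
            print(entry["nid"], entry["title"])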

This project also introduced a somewhat different development workflow for our agency, in an attempt to streamline our processes, and I had a key role in setting that up. Specifically, I built the servers required for it, and I wrote a series of user-friendly Python scripts to seamlessly push the site from the dev environment (accessible only to the development team) to the QA environment (accessible to managers, and used for regular milestone meetings), and to pull data from the old version of the site into the dev environment (which couldn't reach that URL directly).
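
The push scripts were essentially friendly wrappers around drush (Drupal's command-line tool) and rsync. A simplified sketch, with made-up hostnames and paths standing in for the real configuration:

    import subprocess
    import sys

    # Hypothetical hosts and paths; the real scripts read these from config.
    DEV_ROOT = "/var/www/dev-site/htdocs"
    QA_HOST = "qa-web01"
    QA_ROOT = "/var/www/qa-site/htdocs"

    def run(cmd):
        """Run a shell command, echoing it first, and abort on failure."""
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    def push_to_qa():
        # Dump the dev database with drush.
        run(["drush", "--root=" + DEV_ROOT, "sql-dump",
             "--result-file=/tmp/dev-snapshot.sql"])
        # Sync the codebase and files directory to the QA server.
        run(["rsync", "-az", "--delete", DEV_ROOT + "/",
             QA_HOST + ":" + QA_ROOT + "/"])
        run(["scp", "/tmp/dev-snapshot.sql", QA_HOST + ":/tmp/"])
        # Load the snapshot and clear Drupal's caches on QA.
        run(["ssh", QA_HOST,
             "drush --root=" + QA_ROOT + " sql-cli < /tmp/dev-snapshot.sql"
             " && drush --root=" + QA_ROOT + " cc all"])

    if __name__ == "__main__":
        try:
            push_to_qa()
        except subprocess.CalledProcessError as exc:
            sys.exit("Push failed: " + str(exc))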

When deployment day came, I was primarily in charge of the push to production, and more of my custom scripts helped with that task. Thanks to the integration and automation work described above, the new version of the site had all of the old site's content up to the date of launch, but it did not have the old site's comments. Matching those up proved difficult due to the extensive restructuring, and the production server didn't have Python installed, so I wrote a custom PHP script to pull all the comments from the old database, match them to existing content as best it could (handling the dozens of edge cases along the way), and generate a spreadsheet report for review by the Communications team. It also generated a separate spreadsheet of the orphaned comments that couldn't be matched to new content. In the end, out of more than 10,000 comments, fewer than 100 were orphaned on launch day (all of which were expected to be orphaned), and all of the rest were matched correctly.
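
The production script was PHP, for the reason above, but the core matching pass translates roughly to the Python sketch below (kept in Python for consistency with the other examples; the field names and lookup structures are hypothetical):

    import csv

    def match_comments(old_comments, new_nodes_by_path, new_nodes_by_title):
        """Match old comments to new nodes, first by URL alias, then by title.

        old_comments: dicts with 'cid', 'node_path', 'node_title', etc.
        Returns (matched, orphaned) lists for the two spreadsheet reports.
        """
        matched, orphaned = [], []
        for comment in old_comments:
            # First pass: the node kept its URL alias through the restructuring.
            node = new_nodes_by_path.get(comment["node_path"])
            # Second pass: fall back to an exact title match.
            if node is None:
                node = new_nodes_by_title.get(
                    comment["node_title"].strip().lower())
            # (The real script handled dozens of edge cases around here:
            # renamed reports, merged pages, content split across sections.)
            if node is None:
                orphaned.append(comment)
            else:
                matched.append({"cid": comment["cid"], "new_nid": node["nid"]})
        return matched, orphaned

    def write_report(path, rows, fieldnames):
        """Write one of the review spreadsheets for the Communications team."""
        with open(path, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=fieldnames,
                                    extrasaction="ignore")
            writer.writeheader()
            writer.writerows(rows)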

I was also responsible for fixing a post-deployment bug in the site's Twitter feed: the third-party PHP widget that came with the template had no caching or load-balancing, and with the amount of traffic the site received, it was blowing through Twitter's API limits within a minute or two every hour. Having done similar work with the Twitter API before, I knew exactly how to fix it. I heavily modified the widget, giving it the ability to cache the retrieved Twitter feed globally, and I added a load-balancing algorithm so it would only pull new data once every 5-10 minutes, and only if it wasn't running out of API calls. Lastly, I improved the widget's failure state so that, if both the cache and the data retrieval failed, it would simply go blank instead of exposing errors to the general public or showing empty tweets from attempts to parse a malformed or empty array.
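
The widget itself was PHP, but the gist of the caching and throttling logic translates to a short Python sketch; the cache path, refresh window, and fetch callable are all stand-ins:

    import json
    import random
    import time

    CACHE_PATH = "/tmp/twitter_feed_cache.json"  # shared across all visitors
    MIN_AGE = 300   # don't refresh more than once every 5 minutes...
    MAX_AGE = 600   # ...and allow a refresh once 10 minutes have passed

    def get_feed(fetch, remaining_calls):
        """Return tweets for display, hitting the API at most every 5-10 minutes.

        fetch: callable that actually talks to the Twitter API (stand-in).
        remaining_calls: API calls left in the current rate-limit window,
        as reported by Twitter's rate-limit headers.
        """
        try:
            with open(CACHE_PATH) as handle:
                cache = json.load(handle)
        except (OSError, ValueError):
            cache = {"fetched_at": 0, "tweets": []}

        age = time.time() - cache["fetched_at"]
        # Stagger refreshes between 5 and 10 minutes so page loads don't
        # all pile onto the API the moment the cache expires.
        if age > random.uniform(MIN_AGE, MAX_AGE) and remaining_calls > 0:
            try:
                cache = {"fetched_at": time.time(), "tweets": fetch()}
                with open(CACHE_PATH, "w") as handle:
                    json.dump(cache, handle)
            except Exception:
                pass  # keep serving the stale cache rather than erroring

        # Fail blank: never show raw errors or empty tweets to the public.
        return [t for t in cache["tweets"] if t.get("text")]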