Updates to the API, Extension and Crawler


Hi folks!

This post is long overdue, but over the past month we have worked a lot on improving the majority of our systems.


We have moved over to a new crawling system. The previous one used PhantomJS, which is essentially deprecated and ran on an outdated version of WebKit (the engine behind Safari).

Now we use the latest Google Chrome to crawl our documents, which brings the following improvements:

  • Less prone to crawling errors
  • A lot faster (like a lot)
  • Crawls JavaScript sites

Over the coming months we have been planning a migration to a whole new crawler on a different backend (replacing the current Ruby implementation). However, this is currently only in the planning stage, so do not expect it anytime soon.

API and Crisp

Some medium-sized changes have been made to Crisp. Previously, we fetched all data from Postgres ‘as is’, meaning it was up to Postgres in which order rows were returned. As of today, we honor each point's assigned case weight: a weight of 100 is shown at the top, while lower weights are shown below it.

0 = lowest
100 = highest
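The new ordering can be sketched in Ruby (the language of the current backend). The data and field names below are purely illustrative, not Crisp's actual schema; the point is only that results are sorted by weight descending rather than left in the database's default row order:

```ruby
# Hypothetical points, each carrying a case weight from 0 (lowest)
# to 100 (highest). Names and fields are made up for illustration.
points = [
  { name: "Point A", weight: 20 },
  { name: "Point B", weight: 100 },
  { name: "Point C", weight: 55 }
]

# Sort descending by weight: 100 ends up on top, lower weights below,
# instead of whatever order the database happened to return.
sorted = points.sort_by { |p| -p[:weight] }
sorted.each { |p| puts "#{p[:name]} (weight #{p[:weight]})" }
```

In SQL terms, the same ordering would typically be expressed with an `ORDER BY weight DESC` clause rather than sorting in application code.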

This change applies to both the API and the UI, meaning it will also reach the Extension within a couple of hours, once CDN propagation is complete.


That's right: after almost three years, we have overhauled the Extension!

Several improvements have been made, including:

  • Added settings
  • All data is now live, updating in line with CDN pull intervals and Phoenix data
  • Upgraded from Bootstrap 3 to 4 (which makes it look a lot better)
  • Added the ability to disable notifications
  • Shows privacy shields
  • Moved to Font Awesome for icons
  • Fixed a lot of nasty bugs

But the most important bit is that we have automated all releases. Error checking, releasing, and publishing are now fully automatic thanks to our CI!


Awesome job. Congratulations! :partying_face:

Does this mean the Ruby crawler should be working now? I’ve tried many times to crawl today, but all I got was Heroku error pages… :disappointed: