GeoFront: An Initial Concept

Note: My coworkers and I are (very slowly) working on this! Check it out on GitHub! We hope to have a proof of concept in a few months' time.

Since I joined the Geological Survey of Brazil, I’ve had some issues with geophysical data processing software – mainly with the packages designed for grav/mag/gamma datasets. These packages lack either usability (user experience) or performance. GeoFront (a wordplay on the ‘Neon Genesis Evangelion’ series) aims to solve both of these problems while providing a new model for data processing.

Continue reading

The Search Dilemma: Part 2

One of the most interesting things I’ve come across during my master’s years is how applications behave. Obviously, some applications are more amenable to code optimization than others, and those applications will most likely contain a few CPU-intensive functions that may turn into bottlenecks when put in a heavy-load production environment. One of my hypotheses for maintaining quality of service while reducing energy consumption consisted of analyzing such a hot function and monitoring threads: the core running a thread would have its operating frequency upgraded while the thread executes that function and degraded again after it exits the hot function. The assumption is that the other functions do not need to run as fast as the hot function, and running them at the higher frequency therefore wastes energy.
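
To make the idea more concrete, here is a minimal sketch of how a monitor could boost a core's frequency only while the hot function runs, assuming a Linux host with the cpufreq "userspace" governor enabled and sufficient privileges; hot function, CORE and the frequency values are hypothetical placeholders, not my actual implementation.

```python
CORE = 0                 # core the monitored thread is pinned to (illustrative)
HIGH_KHZ = 3_400_000     # upgraded frequency while inside the hot function
LOW_KHZ = 1_200_000      # degraded frequency for everything else

def set_core_freq(core: int, khz: int) -> None:
    """Write the desired frequency to the cpufreq sysfs interface."""
    path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_setspeed"
    with open(path, "w") as f:
        f.write(str(khz))

def run_hot_section(work):
    """Upgrade the core frequency on entry to the hot function, degrade it on exit."""
    set_core_freq(CORE, HIGH_KHZ)     # boost while the bottleneck executes
    try:
        return work()                 # the CPU-intensive hot function
    finally:
        set_core_freq(CORE, LOW_KHZ)  # drop back down once it returns
```

In the actual hypothesis the switch would be driven by monitoring which threads are inside the hot function rather than by wrapping the call site, but the entry/exit frequency transition is the same.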

Continue reading

The Search Dilemma: Part 1

Have you ever wondered what would happen if a search engine (like Google, Bing or Yahoo) took hours to answer your search queries? Well, neither have I. But I presume that most people would get angry and simply stop using it. This assumption is corroborated by a 2009 study [R1] which revealed that a 2-second delay in delivering search results can impact a company's revenue by over 4% per user; in other words, slow answers mean less cash flow.

Big companies have many ways to address this quality-of-service issue and make response times faster: the most obvious is simply deploying faster processors, adding more cache memory and upgrading network speed for distributed computing. However, this approach is not really the most efficient one, as there are financial (deploying more servers costs money) and spatial (your datacenter has limited space) constraints. Jeff Dean [R2] shows some ways to circumvent these constraints and maximize a system's efficiency while guaranteeing the same quality of service for all users. I'll discuss one of them here.

Continue reading