I came across a Tony Robbins article on how to make your dreams a reality. I am a fan of Tony’s. Inspiration is very important in my life, and he’s a very inspiring man. The article describes a framework for achieving your goals, so I thought I’d take a look at it and see where I am and where I want to be.

What puts me in a peak state?

  • Exercise, especially when I do it early in the morning on the weekends. It brings me a sense of achievement before the day has even started.
  • Solitude. I’m an introvert, and I recharge when I’m alone.
  • Meaning in what I do. This is mostly about work, but it matters a lot: if I have to work on something I consider meaningless or a waste of my time, I get drained very fast.
  • Balance. Balance in everything I do: work vs. rest, quantity vs. quality, etc.

What am I passionate about?

This is a difficult question. The other day I was walking back home with a coworker. We were chatting about random things and life stuff, and somehow the conversation led to this Mark Twain quote:

The two most important days in your life are the day you are born and the day you find out why.

When I was younger I thought that money was the answer to everything. It really isn’t. Money is a tool that lets you spend time on the things you’re passionate about. If you don’t know what your passion is, or don’t have the time to pursue it, life can be very sad. I’m not exactly sure what my passion is. I believe it’s partially something that you’re naturally good at. When I first got into programming, at maybe 12 years old, I was very much into computer security and networking. There was a lot of “underground” content back then, most of it way above anything I could comprehend. One of the first real software tools I built was a LAN scanner built on top of ARP. These days I enjoy building tools because I value my time more than anything else. Looking at the past and the present, maybe I am passionate about automation and tooling. If you can’t delegate it, you can definitely try to automate it.

Decide, commit, resolve

One thing I’ve been working on recently is a new tool set for writing API tests. I’m not exactly sure where the idea came from, but it’s meant to automate the boring parts of API testing and make developers more efficient at building reliable systems.

So, following Robbins’ framework, I made a decision to build Bluebook. This project is potentially a stepping stone to financial freedom, if I can keep pursuing the goal long enough.

I also want to be in a peak state most of the time. Looking back at the list above, my peak state is largely defined by the environment I’m in.

Take immediate, intelligent, consistent and massive action

I’ve been working on Bluebook for about a month now. The results are fairly good. The software is not ready to be used yet, but it’s getting close. I am spending 2-3 hours a day working on the idea. I’m not sure whether that counts as massive action, but it is consistent and it slowly moves things forward.

Improvements to my peak state are a bit trickier, because the environment I’m in is the primary driver of my mood. To transition into a peak state, you have to do the things within your control that put you there: modify your existing environment, switch environments, or create the ideal environment yourself. For example, I need to be consistent with morning exercise, or renegotiate how and where I work at my job.

Be SMART

The final step in the framework is SMART, meaning the goals are Specific, Measurable, Achievable, Realistic, and anchored to a Time Frame. I think this is the step where I’m having trouble. I do very little planning ahead. I have a vision, but I don’t have explicit stepping stones that will get me there.

Perhaps in another post I will provide an entire roadmap for Bluebook. Looking back at this brain dump, it makes for a good piece of self-reflection.

continue reading

You gotta climb the highest mountain to master the hill
You gotta climb over your ego to master your will

Logic, Never Been

Since my first post on Bluebook I've been thinking of clear ways to explain the idea behind it and how I think software testing should be performed. That is what we're going to talk about in this first follow-up on the Bluebook project.

What does it mean to test software?

Testing software is easy at some levels and hard at others. Software testing can be partitioned into four categories:

  • Automated technology facing tests.
  • Automated business facing tests.
  • Manual technology facing tests.
  • Manual business facing tests.
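
To make the first category concrete, here is a minimal sketch of an automated technology-facing test, written as a plain Go unit test; the function under test and its name are made up for illustration.

    package pricing

    import "testing"

    // applyDiscount is a stand-in for the implementation detail under test.
    func applyDiscount(total, percent float64) float64 {
        return total - total*percent/100
    }

    // An automated technology-facing test talks to the code directly,
    // in the language of the implementation rather than the business.
    func TestApplyDiscount(t *testing.T) {
        if got := applyDiscount(200, 10); got != 180 {
            t.Fatalf("applyDiscount(200, 10) = %v, want 180", got)
        }
    }

A business-facing test of the same behavior would instead be phrased in domain terms ("a 10% coupon brings a $200 order down to $180") and exercised through the API or UI.
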
continue reading

In the previous article I showed you how to run Lua scripts inside Go applications. In real-world scenarios you'd want to put in more effort to make sure you are guarded against rogue scripts.

A rogue script is one that:

  • Runs commands that should not be allowed.
  • Steals resources from the main application.

For example, if we allow third parties to pass arbitrary scripts to our application, we may want to ensure that such a script has no access to networking or system libraries.

Another example of a malicious script is one that runs indefinitely; an error in a loop could cause the script to never finish. We may also want more control over how much memory the VM can allocate.

These are the topics I am going to cover in this post.
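
As a preview, here is a minimal sketch of the first two ideas. I'm assuming the gopher-lua library here purely for illustration; the previous article doesn't tie you to a specific binding. The state is created without the standard libraries, and a context deadline cuts off scripts that run forever.

    package main

    import (
        "context"
        "fmt"
        "time"

        lua "github.com/yuin/gopher-lua"
    )

    func runUntrusted(script string) error {
        // SkipOpenLibs keeps os, io and friends out of the VM, so the script
        // has no access to system or networking facilities unless we open
        // individual libraries explicitly.
        L := lua.NewState(lua.Options{SkipOpenLibs: true})
        defer L.Close()

        // A deadline guards against scripts that never finish.
        ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()
        L.SetContext(ctx)

        return L.DoString(script)
    }

    func main() {
        // The infinite loop is aborted when the context expires.
        if err := runUntrusted(`while true do end`); err != nil {
            fmt.Println("script stopped:", err)
        }
    }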

continue reading

I like creating software. At any given point in time I have one idea that I'm working on. I'm not a big fan of building random stuff; I want to solve real problems with software.

Hut is the most recent idea I worked on. I wanted to build a simple macOS application for creating mock web services. My target audience was frontend web developers. I wanted to provide a simple way to stub out HTTP services in your JavaScript code, so you can make real requests against a fake service. This means that the backend and frontend teams can work independently once they agree on a service protocol.

Layerstore was probably the coolest thing I built in my spare time. It took me about six months to build the entire platform from nothing. Seeing the uptrend in microservices and Docker, I figured there should be a marketplace for selling software packaged as Docker images, so I built one. Unfortunately, I decided to kill the idea after Docker released the Docker Store. The Docker Store is exactly what I envisioned, but with an existing user base and brand recognition. There's no point fighting a losing fight.

It is a bit disappointing that Layerstore had to be shut down, but I really enjoyed the process; it was a good learning experience, and it confirmed that my intuition is worth something. One thing I regret about all these side projects is that I didn't capture anything while I was building them. I only produced one blog post, on the Layerstore architecture, after I shut it down, but there could have been so much more.

My boy Gary Vaynerchuk says that the easiest way to produce content is by documenting, not creating.

I have a new idea in mind, codenamed Bluebook. I think there's something broken about how we test software systems. Microservices, as much as I love and hate them, are creating new challenges around testing systems as a whole. I want to build a platform for managing and running system and integration tests that is easier to maintain and understand than a custom continuous delivery pipeline composed of scripts. This time I will try to document how the software is created and how it evolves over time. As with any side gig, my time is mostly limited to weekends, and I'll do my best to stay consistent with releasing updates every couple of weeks.

Stay tuned.

continue reading

When video games are written, the game engine is typically written in a low-level language such as C or C++ to achieve the best performance. You need direct access to the hardware, and such languages are perfect for that. Building software in these languages requires a good understanding of the OS, memory management, and internals in general. To make game development more accessible, for instance to UI or level designers, we can introduce a scripting language that hooks into the core engine and performs actions as the game state changes. The idea of scripting your core is amazing.

We can see similar uses in other areas. The NGINX web server supports Lua scripting, which allows you to hook into the request-processing logic; from there you can filter or log requests based on custom rules. The Varnish HTTP cache has similar functionality that lets you decide how to cache your content. A friend at my old job scripted report-generation logic in a C service, which allowed us to get arbitrary aggregations from raw data available in memory. There are plenty of use cases for scripting the core of your technology.

In this article I want to take a look at how we can embed the Lua language into Go applications.
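
As a taste of what that looks like, here is a minimal sketch using the gopher-lua binding (my pick for illustration; the article is about the general technique, not this specific library): Go exposes a function to the VM and then runs a Lua snippet that calls it.

    package main

    import (
        "fmt"

        lua "github.com/yuin/gopher-lua"
    )

    func main() {
        L := lua.NewState()
        defer L.Close()

        // Expose a Go function to the Lua VM under the name "double".
        L.SetGlobal("double", L.NewFunction(func(L *lua.LState) int {
            n := L.CheckInt(1)
            L.Push(lua.LNumber(n * 2))
            return 1 // number of return values pushed onto the stack
        }))

        // Run a Lua script that calls back into Go.
        if err := L.DoString(`print("double(21) = " .. double(21))`); err != nil {
            fmt.Println("lua error:", err)
        }
    }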

continue reading

As an engineer, I like the idea of microservices. The microservices architecture is the ultimate playground for distributed systems. Despite all the nice things that come with it, I want to argue that startups really don't need microservices. I was lucky enough to see SOA, monoliths, and microservices throughout my professional career. Today, microservices are receiving lots of buzz, and many companies, big and small, are jumping on this overhyped bandwagon. Microservices are a good thing and a great example of what the future might look like; however, if you're just starting out with your tech company, you do not need them.

continue reading

If you are running Docker as part of your infrastructure, you are probably also hosting a private Docker registry for storing private Docker images. The vanilla installation is pretty good: you just put Docker Distribution in a private VPC and you are good to go. Now imagine a scenario where you wanted to build a public registry with custom access control over the images, something similar to Docker Hub. How would you do that? The good news is that I built exactly that for Layerstore, and in this article I'm going to show you how you can do it yourself.

Before we go into the nitty-gritty details, let me give you some background on Layerstore. Layerstore was a Docker marketplace where anyone could sell Docker images, either individually or as bundles. The life cycle of a sale looked something like this:

  1. Seller reserves image identifier. This identifier will be used to push and pull images from the registry.
  2. Seller receives read and write permissions to the reserved image identifier.
  3. Seller uploads the image with the docker push command, configures the product page, and sets the price.
  4. Purchaser buys the product and receives read access to the image.
  5. Purchaser downloads the image onto their servers with docker pull.

We are going to explore these steps in detail in a moment. Of course, I am going to skip the irrelevant product parts and concentrate mostly on the Docker registry and the services surrounding it.
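
To give steps 2 and 4 a bit of shape, below is a rough Go sketch of a token endpoint in the spirit of the registry's Bearer token flow: the registry sends the client here with a scope such as repository:seller/image:pull,push, and the server decides which actions to grant. Everything is illustrative, the handler and ACL lookup are made up, and a real implementation must return a JWT signed with a key the registry trusts rather than plain JSON.

    package main

    import (
        "encoding/json"
        "net/http"
        "strings"
    )

    type access struct {
        Type    string   `json:"type"`    // "repository"
        Name    string   `json:"name"`    // e.g. "seller/image"
        Actions []string `json:"actions"` // subset of {"pull", "push"}
    }

    // allowedActions would consult the marketplace database: sellers get
    // pull+push on images they reserved, purchasers get pull on images
    // they bought. Here it is a placeholder.
    func allowedActions(user, repo string) []string {
        return []string{"pull"}
    }

    func tokenHandler(w http.ResponseWriter, r *http.Request) {
        user, _, _ := r.BasicAuth()
        var grants []access
        // Each scope looks like "repository:seller/image:pull,push".
        for _, scope := range r.URL.Query()["scope"] {
            parts := strings.SplitN(scope, ":", 3)
            if len(parts) != 3 || parts[0] != "repository" {
                continue
            }
            grants = append(grants, access{
                Type:    parts[0],
                Name:    parts[1],
                Actions: allowedActions(user, parts[1]),
            })
        }
        // A real token server wraps these grants in a signed JWT and
        // returns {"token": "<jwt>"}; plain JSON is only for illustration.
        json.NewEncoder(w).Encode(map[string]interface{}{"access": grants})
    }

    func main() {
        http.HandleFunc("/token", tokenHandler)
        http.ListenAndServe(":8080", nil)
    }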

continue reading

Chart: memory utilization before and after the fix.

Some time ago I wrote a worker that periodically polls a third-party service for data. We started noticing that the worker process was getting killed by the kernel for hitting its memory limit. The worker's container was given 512MB, which should be more than enough for the job it was doing. The amount of data it fetches ranges anywhere from 25MB to 100MB, and it uses this data to sync some internal state of our systems with the data provided by the third party. I was able to find odd memory-consumption patterns and refactor the code to take memory usage from ~50% to ~13%, and the worker process stopped getting OOM-killed. This post is about the tools I used to find memory problems in a Python application.

continue reading

Docker has gained a lot of popularity in recent years. Thanks to the movement towards microservices, we can get Docker infrastructure from all major cloud providers, such as AWS or Google Cloud.

This is more of a tutorial-style post in which I plan to walk you through bootstrapping fully operating Docker infrastructure from scratch on AWS using Terraform. Managing infrastructure by hand is terrible. I'm not going to go into detail about why, but I believe Terraform is going to be one of those tools that sticks with us for a while, especially once it matures. We're going to use Terraform to build our infrastructure.

continue reading

Locks are very important in distributed systems. Sometimes we want to make sure that only one job runs at a time while keeping the system highly available (e.g., a highly available cron server). Of course, there are many other use cases for distributed locks that I'm not going to talk about. In this post I am going to show you an example of how to implement distributed locks on DynamoDB.
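
To preview the idea, here is a rough Go sketch using the AWS SDK: the lock is just an item whose creation is guarded by a conditional write, so only one worker can hold it at a time. The table and attribute names are made up for the example, and lock expiry and renewal are left out.

    package main

    import (
        "fmt"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
    )

    // acquireLock tries to create an item for lockName; the conditional
    // write fails if another worker already holds the lock.
    func acquireLock(db *dynamodb.DynamoDB, table, lockName, owner string, ttl time.Duration) error {
        expires := fmt.Sprintf("%d", time.Now().Add(ttl).Unix())
        _, err := db.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String(table),
            Item: map[string]*dynamodb.AttributeValue{
                "LockName":  {S: aws.String(lockName)},
                "Owner":     {S: aws.String(owner)},
                "ExpiresAt": {N: aws.String(expires)},
            },
            // Succeeds only if no item with this key exists yet.
            ConditionExpression: aws.String("attribute_not_exists(LockName)"),
        })
        return err
    }

    func main() {
        db := dynamodb.New(session.Must(session.NewSession()))
        if err := acquireLock(db, "locks", "nightly-job", "worker-1", 5*time.Minute); err != nil {
            fmt.Println("could not acquire lock:", err)
            return
        }
        fmt.Println("lock acquired")
    }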

continue reading

I had an opportunity to work on an interesting infrastructure challenge. It goes something like this: we need to persist an incoming data stream of approximately 200 thousand messages per second, and we also need to guarantee data availability and redundancy. This is a typical scale of data I used to deal with at Chartbeat on a daily basis. When working with such high traffic, you're likely to run into questions you might not know the answers to right away.

  • How many servers do we need to handle such traffic?
  • Do we need to store the data and how can we do that?
  • If we must store the data, for how long are we going to need access to it?
  • How much is the new infrastructure going to cost us?

These are just a few of the questions you will have to answer in order to pick the right tools for the job. In this post I will try to provide answers to some of them and also show you a sample infrastructure setup that can handle large amounts of traffic while meeting our requirements.
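
To give a flavor of the back-of-envelope math involved, assume an average message size of about 1 KB (a number picked purely for illustration); the raw ingest rate then works out to:

    $$200{,}000 \,\tfrac{\text{msg}}{\text{s}} \times 1 \,\tfrac{\text{KB}}{\text{msg}} \approx 200 \,\tfrac{\text{MB}}{\text{s}} \approx 17 \,\tfrac{\text{TB}}{\text{day}}$$

and that is before any replication; with three-way redundancy the stored volume roughly triples, which already says a lot about server counts, retention, and cost.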

continue reading

At Chartbeat we are thinking about adding probabilistic counters to our infrastructure, HyperLogLog (HLL) in particular. One of the challenges with something like this is making it redundant while keeping reasonably good performance. Since HyperLogLog is a relatively new approach to cardinality approximation, there are not many off-the-shelf solutions, so why not try to implement HLL on Cassandra?
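
For context, the whole trick behind HLL is that cardinality can be estimated from a fixed array of m registers, where register M_j holds the longest run of leading zeros seen among the hashed items routed to bucket j. The standard estimator is:

    $$E = \alpha_m \, m^2 \left( \sum_{j=1}^{m} 2^{-M_j} \right)^{-1}$$

where \alpha_m is a bias-correction constant. Since merging two HLLs is just an element-wise max over their registers, the structure maps quite naturally onto a Cassandra row, which is part of the appeal.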

continue reading

In the previous post I covered linear regression, which can be used to predict continuous values. This post is a sequel to simple linear regression and talks about weighted linear regression.
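
For reference, weighted linear regression attaches a weight w_i to each observation, and the coefficients minimize the weighted squared error, which has the familiar closed-form solution:

    $$\hat{\beta} = \arg\min_{\beta} \sum_{i} w_i \left( y_i - x_i^{\top} \beta \right)^2 = \left( X^{\top} W X \right)^{-1} X^{\top} W y$$

where W = diag(w_1, \dots, w_n). In the locally weighted variant, the weights typically depend on the distance between each training point and the query point.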

continue reading

This is my first post on machine learning, and hopefully not the last. The main goal of these posts is to serve as a quick reference for simple machine learning problems and their solutions, while also allowing me to get a better understanding of the field itself. That said, don't take anything for granted.

continue reading

For one of the fixes I was working on at work, I had to implement row-level locking in Django. The current stable version of Django, 1.3, does not have built-in support for row-level locking on InnoDB tables. The good news is that the development version already has an update to the QuerySet API that lets you use the select_for_update method to acquire a write lock on rows matching your query. If you can use the development version for your project, you may stop reading and go upgrade Django; otherwise, I will see you at the bottom of the page.

continue reading