Working on a startup is fun. And exhausting. And exciting. And the list goes on. There are times when you achieve a minor breakthrough and feel like the king of the f***ing world. And there are times when a small issue brings everything crashing down.
However, in my experience so far at a startup, I have realized that first you face issues, and then you overcome them. It is never happy first, sad later. It's always, always problem first, solution second. A successful startup starts with a problem!
Let me give you an example. You saw a problem. You thought you could make a business out of it. Problem, then solution. You started working on it and hit a roadblock; you went around it. Problem, then solution. Keep counting.
If you are getting happy moments before the sad ones, you probably are not working at a startup. You really should work at a startup. Here’s why: http://yourstory.in/2013/03/warning-working-for-a-start-up-could-make-you-unfit-for-the-rest-of-the-world-in-good-ways/
Last week I was trying to write a script for an application to validate mobile numbers in India. I decided to use kookoo.in; however, it turned out that I could use it to send SMS only to numbers that were not on the DND list. This was a major setback, since a lot of the application's users will be on this list. Some research showed that I could use other providers, but their plans were sized for established, high-volume senders. As a startup, these plans did not fit our estimated budget. On further thought, I figured maybe we could call the number and have a voice speak out a code for users to then confirm on the site. This was still roughly twice as expensive as sending an SMS, but still better than the SMS gateway plans.
Further googling revealed ZNISMS: I could call their API and their Android app would send an SMS. Pretty cool stuff! However, I could not send more than 100 messages a day, thanks to TRAI regulations again. Sigh!
A thought hit me at this point: if I can use Android to send an SMS, I can also use Android to receive an SMS and notify the site. I could ask users to message me a code shown on the website!
It took about 40 minutes to read up on how to build an application that catches an incoming SMS and processes it. A demo application doing just that is shared on github.com, and can be found at https://github.com/Eashmart/SMSCatcher.
Please feel free to contribute and let me know your thoughts. I’d be happy to share more details.
You can follow on Hacker News here
At Wingify we used Eclipse as our IDE. I, however, was never a real fan of Eclipse, so the first chance I got, I moved over to Sublime Text 2.
It has now been about a year since I started using Sublime Text 2, and every day it makes me more and more productive. One thing I really disliked about Sublime Text 2, though, was some of its key bindings. For example, in Eclipse, deleting a line was as simple as CMD+D, but in Sublime Text it meant CTRL+SHIFT+K. For a Mac user, that's a very weird combo.
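Rebinding that is a one-liner in the user keymap. The snippet below assumes Sublime Text 2's bundled Delete Line macro; check your own Default keymap if the path differs on your install:

```json
// Packages/User/Default (OSX).sublime-keymap
[
    { "keys": ["super+d"], "command": "run_macro_file",
      "args": {"file": "Packages/Default/Delete Line.sublime-macro"} }
]
```

Note that this overrides the default CMD+D binding (expand selection to word), so pick a different combo if you use that often.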
For the last three days I have been configuring Sublime Text to make me even more productive. For example, I have now set up build configurations for each of my projects, so testing my code is as simple as CMD+B; I don't need to keep jumping to the terminal to run tests. Another example is integrating git commands with key bindings: using the sublime-git plugin, committing code is now as simple as CMD+CTRL+C.
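A per-project build configuration is just a small JSON file. This is a sketch for a hypothetical npm-based project; swap the command for whatever your test runner actually is:

```json
// MyProject.sublime-build (hypothetical project name)
{
    "cmd": ["npm", "test"],
    "working_dir": "${project_path}"
}
```

With that selected under Tools > Build System, CMD+B runs the tests and shows the output right in the editor.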
My next step is to figure out how to configure Sublime Text to load settings based on the project I am working on. Then, when working on NodeJS, a save could simply restart the node application; when working on Ruby on Rails, it could restart the rails server; and something similar could run rake commands.
You can find my key bindings at https://gist.github.com/3903546
Right now I am more excited to talk about my first NPM module, mysqlpool. This little module gives a very simple API over a pool of MySQL connections, and distributes the workload using round robin. Round robin is by far not the most efficient load-balancing strategy, but I ran a small benchmark to see how the module performs.
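The round-robin dispatch itself is tiny. Here is an illustrative sketch of the idea (not mysqlpool's actual API), cycling through a fixed set of connections:

```javascript
// Minimal round-robin dispatcher sketch (illustrative, not mysqlpool's code).
// Each acquire() hands out the next connection, wrapping back to the start.
function RoundRobinPool(connections) {
    this.connections = connections;
    this.index = 0;
}

RoundRobinPool.prototype.acquire = function () {
    var conn = this.connections[this.index];
    this.index = (this.index + 1) % this.connections.length;
    return conn;
};

// Usage: cycle through three stand-in "connections".
var demo = new RoundRobinPool(['c1', 'c2', 'c3']);
console.log(demo.acquire()); // c1
console.log(demo.acquire()); // c2
console.log(demo.acquire()); // c3
console.log(demo.acquire()); // c1 - wraps around
```

Round robin ignores how busy each connection actually is, which is exactly the trade-off the benchmark below was meant to sanity-check.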
The benchmarks were run on two MacBook Pros (different models), on the same WLAN, using Apache ab as the benchmarking tool.
The interesting parts to note are:
- The concurrency was set to 50
- The number of requests was 10000
- ab version is 2.3
- NodeJS version is 0.6.14
- In both cases email@example.com was used
Now let's see the numbers.
Without using a pool, we got
Requests per second: 587.69 [#/sec] (mean)
Time per request: 85.078 [ms] (mean)
Time per request: 1.702 [ms] (mean, across all concurrent requests)
And with a pool
Requests per second: 603.83 [#/sec] (mean)
Time per request: 82.804 [ms] (mean)
Time per request: 1.656 [ms] (mean, across all concurrent requests)
This was a decent improvement, given that the concurrency was set to only 50 and the pool was using only 30 connections. A bigger pool and higher concurrency should show a bigger performance jump. The code used for benchmarking is available in the git repository.
PS: I am eager to learn what performance boosts you see, and if there is something you would like to see in the module. Cheers!
While working on our new website at http://visualwebsiteoptimizer.com, a colleague asked if we could dynamically arrange our customers' logos to get a perfect fit. The instant answer was to use the jQuery Masonry plugin.
Unfortunately, it was not what we were looking for. Another colleague and I sat down and wrote a quick plugin based on the 0-1 knapsack algorithm.
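For reference, the 0-1 knapsack core such a plugin builds on can be sketched like this (illustrative, not the plugin's actual code). For logo layout, the "value" of a logo can simply be its width, so the subset that fills the row most completely wins:

```javascript
// Classic 0-1 knapsack via dynamic programming: pick a subset of items
// (each used at most once) maximizing total value within the capacity.
function knapsack01(widths, values, capacity) {
    var best = [];
    for (var w = 0; w <= capacity; w++) best[w] = 0;
    for (var i = 0; i < widths.length; i++) {
        // Iterate capacity downwards so each item is used at most once.
        for (var w = capacity; w >= widths[i]; w--) {
            best[w] = Math.max(best[w], best[w - widths[i]] + values[i]);
        }
    }
    return best[capacity];
}

// Logo widths double as values: fill a 260px row as fully as possible.
console.log(knapsack01([120, 80, 60, 200], [120, 80, 60, 200], 260)); // 260
```

Running this once per row, removing the chosen logos, and repeating gives the tight packing Masonry couldn't.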
We decided to open-source the plugin to give back to the wonderful open-source community we have been relying on for ages. Have fun with it! Go ahead, clone or fork away!
Gathering and storing logs needs a very scalable architecture. There is a limit to how far you can scale vertically, while horizontal scaling is not easy to set up and brings data integrity issues.
Here I present a very scalable and economical method to scale horizontally. Adding an extra node is hassle-free and causes absolutely no downtime. So here is the setup, in plain simple English.
We will be using NodeJS for its Event based IO and Redis for its pub-sub. Let’s begin!
We set up two NodeJS servers. One is publicly accessible while the other is hidden inside our network; let's call these public and private, respectively. The public server is called by our script and receives the log. We then add a random string to this data to act as a unique key, along with a timestamp. The instance simply ends the request, and then publishes the log to Redis.
Our private server quietly subscribes to the Redis channel. When the public server publishes a message, Redis notifies our private instance, which then writes the data to persistent storage. The timestamp, along with the unique key we passed, preserves sequence and data integrity.
Now, when the traffic load increases, we add another public server, and the log requests are randomly sent to one of the two. If Redis begins to reach its capacity, simply add another Redis server, make half of the public servers publish to one and the other half to the second, and have the private server listen to both. If the private server begins to show signs of reaching its limit, create another private server and distribute the Redis subscriptions among the private servers.
A note of caution: if you set up two private servers listening on the same Redis instance, the unique key along with the timestamp will help you avoid inserting the same request into the database more than once.
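That dedup check can be as simple as keying on the (unique key, timestamp) pair before inserting. A hypothetical sketch with an in-memory set; in practice this would be a unique index or key in your datastore:

```javascript
// Hypothetical idempotent-insert sketch: the (id, timestamp) pair identifies
// a log entry, so a second listener delivering the same message is skipped.
var seen = {};

function insertOnce(store, entry) {
    var dedupKey = entry.id + ':' + entry.ts;
    if (seen[dedupKey]) return false; // already inserted by another listener
    seen[dedupKey] = true;
    store.push(entry);
    return true;
}

var db = [];
var sample = { id: 'ab12cd34', ts: 1365000000000, line: 'page_view /home' };
console.log(insertOnce(db, sample)); // true  - first insert succeeds
console.log(insertOnce(db, sample)); // false - duplicate delivery skipped
console.log(db.length);              // 1
```

The random key matters here: the timestamp alone is not unique enough when two logs arrive in the same millisecond.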