When I was waiting for my lunch today, I was flipping through a magazine and saw a little blurb about this… I guess it’s a “game” for 3G cell phones. Basically a virtual girlfriend… But the thing I thought was hilarious (genius actually) was you need to buy her things to keep her happy. The catch? The things you buy her (which are virtual of course) cost real money. Why the hell didn’t I think of that?
I had an idea this morning: why not write code to mimic evolution where needed?
For example, a search engine: their algorithms are constantly being changed and updated to fight spam and whatever else comes along.
It wouldn’t work as well on a small scale, but if you have a large number of end users (like a search engine), I don’t see why it wouldn’t work.
Have the system make hundreds or thousands of small random variations of the existing algorithms (small enough that they don’t destroy the results). Then serve each end user a random version of the algorithm and keep track of how well it was received. If the user needed to dig deep into the search results, or immediately ran a different search, the change probably wasn’t a good one. Give the “good” changes more weight than the bad, and repeat the process.
It’s survival of the fittest, and it becomes a semi-automated way to evolve their algorithms.
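The loop above can be sketched in a few lines. This is just a toy, with a made-up `user_score` standing in for real user feedback (how deep in the results someone had to look, whether they re-searched, etc.):

```python
import random

def mutate(weights, scale=0.05):
    """Make a small random variation of an algorithm's parameters."""
    return [w + random.gauss(0, scale) for w in weights]

def user_score(weights):
    """Stand-in for real user feedback.  Here, closer to a hidden
    'ideal' set of parameters counts as better received."""
    ideal = [0.3, 0.7, 0.5]
    return -sum((w - i) ** 2 for w, i in zip(weights, ideal))

def evolve(base, generations=50, population=100, keep=10):
    pool = [mutate(base) for _ in range(population)]
    for _ in range(generations):
        # "Serve" each variant and record how well it was received.
        scored = sorted(pool, key=user_score, reverse=True)
        survivors = scored[:keep]  # give the good changes more weight
        # Repeat the process: new small variations of the survivors.
        pool = [mutate(random.choice(survivors)) for _ in range(population)]
    return max(pool, key=user_score)

best = evolve([0.0, 0.0, 0.0])
```

The real trick for a search engine would be defining `user_score` from click behavior at scale, which is exactly why this works better with lots of end users.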
Mucked around with the Yahoo API a little bit, and I must admit it’s pretty nice. I wish the Google API was as fast and had as many features.
The first useful thing I did with it was add historical search engine ranking tracking for Yahoo to the keyword tracker (I also added MSN while I was at it).
Turned out pretty cool… now you can see ranking changes for all three search engines over time on a single chart. Weeeeeee….
I played a good round of disc golf with my dad today, missing my best score by one, and he played the worst game I can remember him playing. I beat him by 15 strokes (I ended up 4 under par, and he was 11 over). This one was cool because I did not get any bogeys for the first time in my life.
I’ve spent a few weeks now with the new Xserves (I’m going to put them physically into the data center tomorrow BTW).
For the most part, Mac OS X Server 10.3 is a very nice server platform (especially when you are using Xserves). But maybe someone at Apple will read my little wish list, so here goes…
- Make a system-wide shutdown script option. There is a bug in MySQL that prevents the mysqld process from shutting down with the normal kill function, so you end up with corrupt databases on a reboot or shutdown. This could be prevented with a shutdown script that shuts down the mysqld process with mysqladmin.
- Actually USE the shutdown scripts in /Library/StartupItems/ (this could also solve the mysqld problem).
- Server Monitor application should be able to monitor the health of the disk RAID when using hardware RAID (not just software RAID). I wrote a cron job that runs every 5 minutes to do this, but still… it should be standard I think.
- Something should notify me about high loads on the server. I wrote a cron job that runs every 5 minutes to do this as well, but still… it should also be standard I think.
- IP Failover is nice, but if the backup machine doesn’t give back the primary IP fast enough when it comes back online, you end up with NEITHER server using that IP address. I run a script to do some clean-up before the backup relinquishes the IP address, which can take a little bit. Meanwhile the primary machine sees that another machine has the IP, and stops checking whether it can take it back. I solved this by making a script that executes after the IP is relinquished, which essentially shuts off the Ethernet port on the master, then turns it back on. This one could be catastrophic if you just assume the master took back its IP address.
- Give me an API to the Server Admin application. It would be nice to get the nifty graphs and stuff for MySQL transactions, for example.
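For what it’s worth, the load-check cron job mentioned above doesn’t need to be fancy. Here’s a minimal sketch of the idea; the threshold is an arbitrary example, and a real version would email or page you instead of printing:

```python
#!/usr/bin/env python
"""Cron job sketch: warn when the 5-minute load average is high.

The threshold below is made up for illustration; a real version
would send mail (e.g. via smtplib) rather than print to stdout.
"""
import os

LOAD_THRESHOLD = 8.0  # arbitrary example threshold

def check_load(threshold=LOAD_THRESHOLD):
    """Return a warning string if the 5-min load average exceeds
    the threshold, or None if everything looks fine."""
    one, five, fifteen = os.getloadavg()
    if five > threshold:
        return "WARNING: 5-min load average is %.2f (threshold %.2f)" % (
            five, threshold)
    return None

if __name__ == "__main__":
    message = check_load()
    if message:
        print(message)
```

Run it from crontab every 5 minutes and pipe any output to mail, and you get roughly what I hacked together.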
I got a fortune cookie today that I’m pretty sure will come true…
You will turn one year older this year.
Wow, a very profound statement. heh
Today at the SES show, Yahoo announced they are following Google’s lead and allowing people to utilize an API for making automated queries without violating their terms of service.
In some ways it’s better than the Google API because it allows you to do more than just web search (you can search images, local, news, video and web). If nothing else, maybe it will be the incentive that Google needs to come out with a new version of their API (the Google API is “beta”, but has not been updated in over 2 years).
One thing I don’t like about the Yahoo API is it’s limited to 5,000 queries per day, per IP address. Google’s method of 1,000 queries per day, per person lets you build killer web apps (like the keyword tracker). But the IP address limitation does not make that practical with the Yahoo API. Either way, it’s really cool they have one now.
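To see why the per-IP limit kills hosted web apps, here’s a tiny quota tracker. The 5,000 and 1,000 limits come from the APIs; everything else is illustrative:

```python
from collections import defaultdict

class DailyQuota:
    """Track queries used per key per day.  The key is an IP address
    for Yahoo-style limits, or a per-person developer key for
    Google-style limits."""
    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)  # (key, day) -> queries used

    def allow(self, key, day):
        if self.counts[(key, day)] >= self.limit:
            return False  # over quota for this key today
        self.counts[(key, day)] += 1
        return True

# Yahoo-style: 5,000/day per IP.  A web app's single server IP burns
# through the whole quota no matter how many users it serves.
yahoo = DailyQuota(5000)

# Google-style: 1,000/day per person.  Each user of a web app can
# supply their own key and bring their own quota with them.
google = DailyQuota(1000)
```

With the per-person model, a keyword-tracker-style app scales with its users; with the per-IP model, every user is drawing down the same server-side bucket.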
Check it out over here: