tag:blogger.com,1999:blog-1120685363230208732024-08-28T14:43:39.760-07:00kpoxitI'll share some bits of IT knowledge here: CScience, WEB, Networking, Python, C++, Unix, Linux, Ubuntu, etc.Unknownnoreply@blogger.comBlogger31125tag:blogger.com,1999:blog-112068536323020873.post-18743484984978025252012-05-15T10:33:00.001-07:002012-05-15T10:34:06.994-07:00Never use run_erl<div dir="ltr" style="text-align: left;" trbidi="on"> In one of my projects I used run_erl to launch the Erlang VM in daemon mode, rotate logs, etc. It seemed fine to use a standard Erlang tool: run_erl is a part of the Erlang distribution.<br /> <br /> I discovered an unexpected performance problem running a production application with run_erl. The application caused high <b>iowait </b>for no apparent reason.<br /> <br /> It was hard to pinpoint the exact cause, but after a few hours I found out that run_erl <b>fsyncs on every log message</b>, causing excessive IO load.&nbsp;</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-77675031478503357332011-12-29T13:44:00.000-08:002011-12-29T13:44:41.867-08:00gen_server antipattern<div dir="ltr" style="text-align: left;" trbidi="on">Two gen_server-s should never gen_server:call each other:<br /> <br /> <ol style="text-align: left;"><li>gen_server A calls gen_server B: it sends a request message and blocks on receive.</li> <li>gen_server B calls gen_server A: it sends a request message and blocks on receive.</li> <li>Neither A nor B can respond to the other's request, resulting in a deadlock.</li> </ol></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-45743442552168088642011-09-21T05:49:00.000-07:002011-09-21T05:49:06.508-07:00Cassandra migration from 0.6 to 0.7<div dir="ltr" style="text-align: left;" trbidi="on">Cassandra is not mature. I discovered data corruption errors in 0.6.
I found nothing that could help me fix this, so I decided to migrate to 0.7, hoping the errors were fixed there.<br /> <br /> All you have to do is follow the migration instructions in the NEWS file. But there are three pitfalls:<br /> <br /> <b>libjna problem</b>&nbsp;in the DEB package. Ubuntu ships an earlier version than Cassandra requires, but the package installs fine (the dependency version numbers are wrong). This leads to very strange effects and errors. To fix this, install libjna manually, as described&nbsp;<a href="http://journal.paul.querna.org/articles/2010/11/11/enabling-jna-in-cassandra/">here</a>.<br /> <br /> <b>Saved caches problem</b>. Before starting up 0.7 you have to manually delete the old saved_caches dir.&nbsp;Otherwise&nbsp;you get "Negative array size" exceptions on startup.<br /> <br /> <b>Java heap size problem</b>. After fixing the previous problems, I discovered performance degradation in production.&nbsp;Analyzing&nbsp;this, I noticed that the Java process occupied 13 GB (of 24) of RAM. With 0.6 it took about 1-2 GB. In 0.7 the Cassandra init scripts set both the minimal and maximal (-Xms, -Xmx) Java heap sizes to RAM/2. While that is OK for the maximum, setting -Xms to 12 GB means that this memory is not going to be used for your actual data. Cassandra accesses data via mmap, and mmap only accesses data in the system page cache, which is shrunk by the 12 GB Java heap. You have to manually edit /etc/cassandra/cassandra-env.sh and set the heap to 2 GB (or so).</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-51083966671743977812011-05-22T04:08:00.000-07:002011-05-22T04:08:12.966-07:00ThinkPad X220<div dir="ltr" style="text-align: left;" trbidi="on">Finally got my X220. First, I replaced the HDD with an Intel SSD 160GB G2. It required a small hardware tweak: the X220 takes a 7mm HDD and the SSD was about 12mm high, so I had to remove the plastic frame from the SSD.<br /> <br /> DisplayPort turned out to be a disadvantage. Only a few display models support DisplayPort.
And very few of them come with the corresponding cable in the box.<br /> <br /> All the DisplayPort-to-HDMI/DVI cables seem to be half-working.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-68439032177786867992010-11-16T07:51:00.000-08:002010-11-16T07:53:11.024-08:00When to touch swappinessThere is a lot of discussion on the lists about whether or not to tou<span class="Apple-style-span" style="line-height: 19px;">ch the&nbsp;<b>/proc/sys/vm/swappiness</b> parameter, and there is no definitive answer. I figured out a situation where tuning the parameter can really improve performance.</span><br /> <span class="Apple-style-span" style="line-height: 19px;"><br /> </span><br /> <span class="Apple-style-span" style="line-height: 19px;">On the machine:</span><br /> <ul><li><span class="Apple-style-span" style="line-height: 19px;">RAID-1 of three HDDs</span></li> <li><span class="Apple-style-span" style="line-height: 19px;">12 GB RAM</span></li> <li><span class="Apple-style-span" style="line-height: 19px;">Apache Cassandra instance with 25 GB of data</span></li> <li><span class="Apple-style-span" style="line-height: 19px;">ejabberd instance</span></li> </ul><div><span class="Apple-style-span" style="line-height: 19px;">The IO is created by Cassandra, which reads many random data pages and occasionally writes&nbsp;sequential&nbsp;100-200 MB chunks of data. Also some IO is created by swapping ejabberd memory in and out.</span></div><div><span class="Apple-style-span" style="line-height: 19px;"><br /> </span></div><div><span class="Apple-style-span" style="line-height: 19px;">So most of the write load is created by swapping out random ejabberd memory pages. And we know that RAID-1 is N times better at reads than at writes. By decreasing the swappiness parameter from 60 to 20, I moved the IO load from write to read.
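For reference, the change itself is a single kernel knob (standard Linux paths; 20 is simply the value that worked for this workload):

```
# /etc/sysctl.conf
vm.swappiness = 20
```

Run sudo sysctl -p afterwards to apply it without a reboot.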
Almost no random swap writes were left.</span></div><div><span class="Apple-style-span" style="line-height: 19px;"><br /> </span></div><div><span class="Apple-style-span" style="line-height: 19px;">The IO load really decreased. Not a huge optimization, but worth doing.</span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-47088362257435719262010-11-13T08:31:00.000-08:002010-11-13T08:31:09.862-08:00Apache Cassandra experienceOn one of my projects I switched from <a href="http://www.postgresql.org/">Postgresql</a> to <a href="http://cassandra.apache.org/">Cassandra</a>. There were reasons for the switch.<br /> <br /> First. For each user I had to keep an inbox for storing incoming messages and events. What is an inbox? It is a sorted collection of items. Items are accessed using ranged queries. This caused huge IO overhead on Postgres, because of the <a href="http://kpoxit.blogspot.com/2009/09/postgresqls-huge-disadvantage.html">lack of clustered indexes</a>. All "tables" in Cassandra are clustered, because they are kept as SSTs (sorted string tables).<br /> <br /> Second. My application had huge write&nbsp;throughput. Postgres is good at writes, with its write-ahead log and&nbsp;absence&nbsp;of table locks on write. But even after <a href="http://kpoxit.blogspot.com/2009/06/write-heavy-setup-for-postgresql.html">write-aware optimizations</a>&nbsp;it still was not enough. Cassandra's data write process is completely different, and it suits my needs better.<br /> <br /> Third. The application servers are <a href="http://twistedmatrix.com/trac/">Python Twisted</a>&nbsp;applications. There is one Postgres binding for Twisted, and it is abandoned and buggy. The Cassandra API is available via <a href="http://thrift.apache.org/">Thrift</a>, which in turn supports Twisted.
I recommend the great&nbsp;<a href="https://github.com/driftx/Telephus">Telephus</a>&nbsp;wrapper for Thrift and Twisted.<br /> <br /> On Cassandra's IRC channel people tell each other about their Cassandra clusters. I look a bit stupid when saying I have a single node. But who cares? If it works better than Postgres for me - why not?<br /> <br /> Disclaimer: I am not saying here that Cassandra is better than Postgresql. It just suits this particular application better. I use Postgresql a lot in many other projects.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-8904532616359130292010-11-09T12:07:00.000-08:002010-11-16T07:53:50.624-08:00Google AppEngine ExperienceAt first glance, AppEngine is <b>really nice</b> with all that cloud computing. Pay only for what you use. Scale&nbsp;indefinitely. Of course, you have some limitations, like a custom (Python or Java) environment with predefined APIs. But the APIs are really good and mostly sufficient.<br /> <br /> At second glance, AppEngine is <b>really, really nice</b>! You'll find a great toolset in the SDK and the application management console. Version management, quota settings,&nbsp;convenient&nbsp;shell scripts in the SDK for deployment and testing. Also log managers, a kind of simple profiler, etc. I can't imagine how much effort was spent on the toolset.<br /> <br /> At third glance you'll find AppEngine <b>unusable</b>.<br /> <br /> <ul><li>Two years after release, there are unexpected errors in the management console. Sometimes I cannot enter it for hours.</li> <li>When you need to delete a table from the datastore - cross your fingers. Sometimes a table becomes corrupted and you cannot delete it. Only recreating the application helps.</li> <li>AppEngine pricing claims 10 cents per CPU hour. Good. But you have to use the CPU through the API. When I tried to upload my <b>1 GB database</b> to AppEngine, it took some hours of real time and some <b>days</b>&nbsp;of AppEngine CPU time.
It cost me about <b>$60</b>&nbsp;just to upload my database! I have to admit, this is the hard part. But Postgresql moves this database back and forth in minutes!</li> <li>Finally, I managed to port my application and upload all the data. But the cost per pageview is&nbsp;tremendous. It would cost me hundreds of bucks a month instead of the current inexpensive dedicated server (which is about 10% busy at peaks).</li> </ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-89587540516898084712010-09-28T06:00:00.000-07:002010-09-28T08:33:04.570-07:00Relay auth to an XMPP componentStandard ejabberd auth modules include odbc, ldap, and external (with a script). Also there is the original module "internal", which keeps user data in Mnesia. But sometimes these modules are not enough.<br /> <br /> Writing a custom auth module for ejabberd is easy. Copy, for example, ejabberd_auth_internal.erl and replace its interface methods with your own.<br /> <br /> Recently I needed to relay auth to an XMPP component. I had to make an IQ request inside the check_password method. The problem is that check_password is called in the user session process, and routing for the session does not work yet. This means you won't receive XMPP stanzas in this method by calling <b>receive</b>.<br /> <br /> Take a look at the working snippet:<br /> <code></code><br /> <code><br /> check_password(User, Server, Password) -&gt;<br /> &nbsp;&nbsp; &nbsp;SelfJid = jlib:string_to_jid(Server),<br /> &nbsp;&nbsp; &nbsp;AuthJid = jlib:string_to_jid("profile1." ++ Server),<br /> &nbsp;&nbsp; &nbsp;IQGet = #iq{<br /> &nbsp;&nbsp; &nbsp; &nbsp;type = get,<br /> &nbsp;&nbsp; &nbsp; &nbsp;sub_el = [{xmlelement, "query", [{"xmlns", ?NS_SUP_AUTH}, {"user", User}, {"password", Password}], []}]<br /> &nbsp;&nbsp; &nbsp; },<br /> &nbsp;&nbsp; &nbsp;Pid = self(),<br /> <br /> &nbsp;&nbsp; &nbsp;F = fun(IQReply) -&gt;<br /> &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Pid ! 
IQReply<br /> &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;end,<br /> <br /> &nbsp;&nbsp; &nbsp;ejabberd_local:route_iq(SelfJid, AuthJid, IQGet, F),<br /> <br /> &nbsp;&nbsp; &nbsp;receive #iq{type = result} -&gt;<br /> &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;true;<br /> &nbsp;&nbsp; &nbsp;Other -&gt;<br /> &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;?INFO_MSG("Auth IQ for ~s failed: ~p", [User, Other]),<br /> &nbsp;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;false<br /> &nbsp;&nbsp; &nbsp;end.</code><br /> <div></div><div><br /> </div><div>profile1.server.com is the JID of the auth component. The component&nbsp;receives&nbsp;an IQ with the user id and password and returns a "result" IQ if auth succeeds, an "error" IQ otherwise.</div><div><br /> </div><div>The trick here is to use ejabberd_local:route_iq. But then we need to block the call flow of check_password until the auth component returns the result. route_iq takes a function parameter which is called in the local router process. It routes the reply back to a function based on its own IQ id -&gt; function map. Another trick is to make the original (client) process pid reside in the function's closure. Then we can safely block with receive and wait for a message from the function.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-81100518018724926372010-09-10T12:28:00.000-07:002010-09-10T12:28:43.002-07:00Thinkpad T61 warranty repairI've been using a T61 for about two and a half years.<br /> Recently the screen flashed and turned off while I was working. It didn't turn on any more.<br /> <br /> I checked my serial number at the Lenovo site and happily discovered that a few months of warranty were left.<br /> <br /> It took almost three weeks to fix. They replaced the LCD display (this was expected), then the motherboard and the panel with the touchpad. It seems the motherboard had burnt out together with the LCD.
And the plastic panel was a bit damaged - they replaced it too :)Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-11467657467969356162010-07-26T00:36:00.000-07:002010-07-26T00:36:56.383-07:00Why my Android sucksI've been using an HTC Legend with Android 2.1 for about two months. There are a few major things that make the experience much worse than it could be:<br /> <br /> <ul><li>The device is unable to play popular video formats (divx, xvid, etc) out of the box. The only usable video format is H264 - you have to convert videos to it.</li> <li>Android Market does not work in Russia - what a joke! There are alternatives, but still. Apple's AppStore is already here, btw.</li> <li>I cannot send files via Bluetooth from the device to a PC or vice versa. At the same time SE z530i &lt;-&gt; Legend works fine, and PC &lt;-&gt; SE z530i works fine. Maybe Ubuntu on my PC lacks some BT profiles (Bluetooth FTP, I guess). But it is the device's problem, not Ubuntu's, because the HTC Legend is a consumer product, and it should support everything.</li> <li>I cannot create complex Wi-Fi connections with a fixed IP and VPN. An iPod can do it easily.</li> <li>It has annoying glitches in the base preinstalled software, for example the Weather Widget.</li> </ul>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-47507124516673108572010-06-13T12:51:00.001-07:002010-06-13T12:56:23.765-07:00Advance Wars for AndroidAs always, my dream project was already implemented by someone else.<br /><br /><a href="http://en.wikipedia.org/wiki/Advance_Wars">Advance Wars</a>, my favourite game for the Game Boy Advance, was reinvented for Android.<br /><br /><a href="http://www.larvalabs.com/product_detail.php?app=25">Battle for Mars</a> is a remake of the original game, keeping most of the tactical gameplay fun but with a different storyline and graphics.<br /><br />Actually there is zero storyline, compared to the original game.
On the whole, it is actually worse than the original game :) But I like it anyway.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-86803515853714729992010-04-20T07:42:00.000-07:002010-04-20T07:48:47.217-07:00BitbucketFor the last month I've been using <a href="http://bitbucket.org">Bitbucket</a> for hosting my company's <a href="http://selenic.com/mercurial">Mercurial</a> repository.<br /><br />I am an admirer of the SaaS approach, but Bitbucket has succeeded in proving me the opposite.<br />My repository is unavailable right now. And this happens much more often than I expected from a service dedicated to code hosting. pull/push operations take a long time (up to 5-10 seconds). I have no idea how one can make Mercurial so slow.<br /><br />Maybe I'll give <a href="http://www.assembla.com">Assembla</a> a try. But more probably, I'll rent a small VDS at <a href="http://gandi.net">Gandi</a> and deploy my own Mercurial installation there.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-91738739776662764112009-12-23T07:22:00.000-08:002009-12-23T07:24:54.212-08:00flashpolicytwistd Ubuntu packageI've created an Ubuntu DEB package for <a href="http://code.google.com/p/flashpolicytwistd">flashpolicytwistd</a> - a simple Flash Policy Server written in Python/Twisted.
This simplifies the installation process a lot.<br /><br />Find the deb at the project's download page.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-27172909556436122182009-12-21T07:01:00.000-08:002009-12-21T07:04:51.780-08:00CPU benchmarkI wrote a simple Python program which builds an R-Tree index for 100k nodes.<br /><br />The program runs a single thread, which means that only a single core of the CPU is working.<br /><br />3m 31.322s Intel(R) Core(TM)2 Duo CPU T7300 @ 2.00GHz<br />3m 2.835s Quad-Core AMD Opteron(tm) Processor 2372 HE @ 2.1Ghz<br />1m 31.393s Intel(R) Core(TM) i7 CPU 975 @ 3.33GHzUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-47975338332936804722009-11-28T08:10:00.000-08:002009-11-28T08:14:08.577-08:00ssh hangup when working via routerI've discovered that my ssh connection occasionally hangs up when I am working through my Wi-Fi router. And ssh works fine when the PC is connected directly to the WAN.<br /><br />This might happen because the router drops the connection from its NAT table due to inactivity. To fix this, edit (or create) <b>~/.ssh/config</b> and add a few lines:<br /><pre><br />Host *<br /> ServerAliveInterval 60<br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-82100745015799236832009-10-16T05:17:00.000-07:002009-10-16T05:21:07.518-07:00Flash policy serverFlash player uses a policy server to check its permission to open sockets to certain ports of a certain server.<br /><br />Adobe provides a sample Flash policy server, but it is unusable for production: it creates a thread per connection.
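A thread per client is unnecessary for this job: the whole exchange is one tiny request and response, which fits in a single-threaded event loop. Here is a minimal sketch using Python's standard selectors module (an illustration of the protocol only, not flashpolicytwistd's actual code; the policy XML and port are made up):

```python
import selectors
import socket

# Illustrative policy: allow any domain to connect to ports 8000-9000.
POLICY = (b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="8000-9000"/>'
          b'</cross-domain-policy>\x00')

def serve(host="127.0.0.1", port=8843):
    sel = selectors.DefaultSelector()
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(128)
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ)
    while True:  # a single thread serves every client
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is srv:
                conn, _ = srv.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                # Flash sends "<policy-file-request/>\0"; answer and close.
                data = sock.recv(1024)
                if b"<policy-file-request/>" in data:
                    sock.sendall(POLICY)
                sel.unregister(sock)
                sock.close()
```

A real deployment would listen on port 843, which the Flash player queries by default; binding below 1024 needs root, so the sketch uses a high port.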
The sample server also shows strange virtual memory usage.<br /><br />That is why I wrote the simple <a href="http://code.google.com/p/flashpolicytwistd/">flashpolicytwistd</a> using Python/Twisted.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-46775096459613161262009-09-12T18:30:00.000-07:002009-09-12T18:49:19.725-07:00Postgresql's huge disadvantageWhen building a database which serves a great deal of ranged queries, it can be extremely helpful to have <a href="http://en.wikipedia.org/wiki/Index_(database)#Clustered">clustered index</a> support.<br /><br />Let's say you keep a table of messages. Querying the last 10 in an inbox takes about 1 disk seek in the worst case with a clustered index on (receiver, timestamp). When there is no clustering, be ready to issue 10 disk seeks.<br /><br />InnoDB and MS SQL Server both have clustered index support. Postgresql instead provides the CLUSTER command, which must be explicitly issued to rebuild the internal database structure and cluster rows according to the specified index. In order to keep your DB more or less clustered, you have to cron the CLUSTER command. <br /><br />But:<br /><br />1) CLUSTER takes an exclusive lock on the table. It took 2 hours to cluster my 3 GB of data. A daily cron would give my application 10% downtime. Nice. You can try to mitigate this by using <a href="http://pgfoundry.org/projects/reorg/">pg_reorg</a>.<br /><br />2) Clustering does not change any logical data, only the physical storage layout. Nevertheless, it generates an amount of WAL equal to the size of the data. Again, a daily CLUSTER would add 3 GB of backup traffic.
Same with <a href="http://pgfoundry.org/projects/reorg/">pg_reorg</a>.<br /><br />All this makes clustered indexing in Postgresql unusable.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-55024543985772862942009-09-06T11:42:00.000-07:002009-09-06T11:43:55.001-07:00Postgresql tuple/row<span style="font-weight:bold;">Q:</span> What is the difference between a tuple and a row in Postgresql?<br /><span style="font-weight:bold;">A:</span> A tuple is a particular version of a row.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-45225249617707496272009-08-17T13:36:00.000-07:002009-08-17T13:52:16.363-07:00Serving static with nginx and varnishI used nginx as a <a href="http://kpoxit.blogspot.com/2008/12/how-to-save-60-month-with-20-lines-of.html">reverse-proxy in front of amazon s3</a>.<br /><br />A month ago I decided to try <a href="http://varnish.projects.linpro.no/">varnish</a>. It is designed from the ground up as a reverse-proxy. Also, I thought that the nginx solution wasted a lot of resources by keeping lots of tiny images in separate files.<br /><br />But after a month of experiments, I discovered high iowait values and severe load on the hard disk, causing service problems. I rolled back to the previous nginx static scheme. Iowait dropped from a frightening 100-150 to an acceptable 25.<br /><br />I used varnish 2.0.4 running with 3 GB of file storage. It consumed 0.5-1 GB of memory. Does anyone have a clue why varnish performed so much worse than nginx?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-63377801304898124562009-07-20T09:11:00.000-07:002009-07-20T09:21:27.717-07:00Twisted logging pitfallI run my Twisted processes as<br /><code><br />twistd --logfile /var/log/somelogfile.log --pidfile /var/run/somepidfile.pid -y sometacfile.tac<br /></code><br />Twisted chops and rotates log files by itself.
By default it generates 10 MB file chunks.<br /><br />When the current somelogfile.log becomes larger than 10 MB, Twisted moves it to somelogfile.log.1 and continues logging to an empty file. If there are more than 2 chunks, they are named so that a larger number at the end corresponds to an older log. To achieve this, Twisted renames N log files, where N is the number of chunks.<br /><br />In my system there were tens of thousands of chunks. I didn't even realize that rotating them puts huge stress on the HDD, causing unexpected iowait peaks. Moving the chunks to a separate folder eliminated the problem, saving me from buying more hardware :)<br /><br />I'll investigate whether it is possible to use <b>logrotate</b> or something similar to handle all this automatically.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-46157219055873489482009-06-23T07:11:00.000-07:002009-06-23T07:27:11.777-07:00Postgresql transaction counter (transactions per second)There are transaction counters for each database in a cluster.<br /><br />If you want to find out how many transactions your system has generated so far, connect to any database as a superuser (postgres) and run<br /><code><br />select sum(xact_commit) from pg_stat_database;<br /></code><br />Easy, but it took some time to find the recipe.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-112068536323020873.post-90737266895175445522009-06-13T14:44:00.000-07:002009-06-22T01:35:01.083-07:00Write-heavy setup for PostgresqlMy project has a database which is updated almost as frequently as it is read. The main bottleneck for the database was disk speed. Here are some tips on how to optimize the Postgresql configuration to avoid overusing disk IO. In my case it helped to reduce iowait from ~150 to less than 50 on average.<br /><br /><span style="font-weight:bold;">synchronous_commit</span>. Since the users' score is not a critical parameter, it is safe to set synchronous_commit to off.
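In postgresql.conf this is a one-line change; it can also be toggled per session (SET synchronous_commit = off) so that only non-critical transactions are affected:

```
# postgresql.conf
synchronous_commit = off
```
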
The worst thing that can happen is that you lose the last several transactions.<br /><br /><span style="font-weight:bold;">checkpoint_segments, checkpoint_timeout</span>. A checkpoint causes all the modified data to be stored in the actual table structures. Before the checkpoint happens, the WAL guarantees data durability. If you have a frequently modified row, it is checkpointed each time. If checkpoints happen too frequently in your database, it is a significant overhead. Increase the parameters to make checkpoints happen less frequently.<br /><br /><span style="font-weight:bold;">Background Writer</span>. It writes dirty pages in the background to reduce the amount of work for the checkpoint. Again, a frequently modified value might be written on each BW activity round. This is overkill for a write-heavy database, because the value will be checkpointed anyway. I turned BW off entirely, setting <span style="font-weight:bold;">bgwriter_lru_maxpages = 0</span>.<br /><br />Hope it helps. Comments are extremely welcome.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-81240361161924891922009-04-17T08:15:00.000-07:002009-06-22T01:34:23.544-07:00Postgresql in Ubuntu distributionUbuntu has a default shared memory limit of about 32 MB. This is why (I guess) the packaged Postgres has the <strong>shared_buffers</strong> parameter set to a modest 24 MB.<br /><br />This is quite a low value for a large DB and for modern hardware. There are numerous recommendations on how big this value should be.
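For example, raising it to roughly 100 MB (an illustrative value, chosen to fit under the ~110 MB kernel limit configured below; units like MB require Postgresql 8.2 or newer):

```
# postgresql.conf
shared_buffers = 100MB
```
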
It makes sense to experiment with this value and run your own benchmarks.<br /><br />To increase the kernel shared memory limit, edit /etc/sysctl.conf and add or replace the following (about 110 MB in this example):<br /><pre><br />kernel.shmmax = 110000000<br /></pre><br />Then run<br /><pre><br />sudo sysctl -p<br /></pre><br />to make the settings take effect immediately.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-112068536323020873.post-23426925495463216032009-04-15T14:17:00.000-07:002009-04-16T01:59:43.456-07:00UTC datetime to UNIX timestampA UNIX timestamp is a fairly reliable value which does not depend on the timezone or daylight saving time.<br /><br />There are a number of posts in the "Internets" about how to convert a timestamp to a datetime. But all of them either are mistaken or cover converting a timestamp to a local timezoned datetime object.<br /><br />The correct (and awfully awkward) way to convert a UTC datetime object to a timestamp:<br /><pre><br />from datetime import datetime<br />from time import mktime, timezone, time<br /><br />def utcdatetime_to_ts(dt):<br /> return mktime(dt.utctimetuple()) - timezone<br /></pre><br />Then you can always:<br /><pre><br />assert abs(utcdatetime_to_ts(datetime.utcnow()) - time()) &lt;= 1<br /></pre><br />Check also a better and shorter version in the comments.Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-112068536323020873.post-25702820950488883882009-04-13T10:09:00.000-07:002009-04-13T10:20:15.241-07:00Hosting migration storyAs our code develops and the user base grows, we need to adjust our server hardware to be cheap and powerful enough to handle the load.
Here is the timeline:<br /><ul><li>up to autumn 2008: Amazon EC2, small instance: 0.5 cores, 1.7 GB RAM</li><li>up to Jan 2009: Gandi.net: 1..2 cores, 1..2 GB RAM</li><li>up to Apr 2009: Serverloft.com L server: 4 cores, 4 GB RAM</li><li>since Apr 2009: Serverloft.com XL server: 4*2 cores, 8 GB RAM</li></ul>While cloud solutions like EC2 and Gandi.net provide a great deal of flexibility, for us it is still cheaper to stick with a traditional dedicated server. Serverloft, while being a DS provider, still offers many features previously available only to VDS users: OS reinstall and hard reboot via a web interface.Unknownnoreply@blogger.com0