Quote:
If you ride a motorcycle often, you will be killed riding it. That much is as sure as night follows day. Your responsibility is to be vigilant and careful so as to keep pushing that eventuality so far into the future that you die of old age first.
I renamed a top-level folder in my source folder and watched SyncToy rename/move every file under it in the target folder. It is not smart enough to recognize that only the top-level folder has changed. While the final effect is the same, it is not as efficient: one rename vs 1,000+ renames.
As an end-user, I expect SyncToy to handle this case. But, how would I do it as a programmer? (Note that I have added files to the src folder, so both folders will not match exactly.)
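For what it's worth, here is one way I imagine a sync tool could detect the rename — a sketch of my own, not how SyncToy actually works; the function name and the 90% threshold are assumptions. When a folder disappears on the source and a new folder appears, compare their relative file listings and treat a near-complete match as a rename:

```js
// Sketch: treat a "deleted" folder A and a "new" folder B as a rename
// if almost all of A's files (by relative path) also exist in B.
// looksLikeRename() and the 0.9 threshold are assumptions for illustration.
function looksLikeRename(filesInA, filesInB) {
  var setB = {};
  for (var i = 0; i < filesInB.length; ++i) setB[filesInB[i]] = true;

  var matched = 0;
  for (var j = 0; j < filesInA.length; ++j) {
    if (setB[filesInA[j]]) ++matched;
  }
  // B may have extra files (I added files to the src folder), so only
  // require that most of A's files are present in B.
  return filesInA.length > 0 && matched / filesInA.length > 0.9;
}
```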
My question to HDB:
There has been an increase in the number of birds in the vicinity of my flat. I have to bear with their annoying cries from morning till evening, especially over the weekend — when I'm at home the whole day.
I only have one question: can I take action to exterminate them?
I used the word "can". It should be "may".
Update (29/6): HDB called and said it falls under the responsibilities of the Town Council.
I finally downloaded my old Web contents from my ISP, GeoCities, Tripod and MegaOne. I lost the local copies when my HD crashed in 2001. As you can see, I haven't touched the stuff in years.
Other than my ISP, the others are free. To tell the truth, I'm surprised they are still around. Also, I thought my webpages would have been shut down due to the lack of views.
My ISP only offers a pathetic 2 MB of web space — even today! That drove me to look for free website hosting. I quickly found three: GeoCities (15 MB), Tripod (20 MB) and MegaOne (50 MB). 50 MB was HUGE in 2000, especially for a free service.
Obviously, the free services come with a string of limitations. (There's no free lunch.) All three add ads to your HTML page intrusively. MegaOne is the most restrictive: it only allows HTML and JPG, no JS, no TXT. GeoCities limits my bandwidth to 4 MB/hr! (I don't recall it being so low in the past.) Anyway, GeoCities is ending its free webhosting service.
(In 2002, I had enough of the restrictions, so I signed up with a real webhosting service. I'm still using it today.)
Now, all three allow you to upload multiple files at a time, but none allows you to download at all! You have to go to their File Manager and click one file at a time, as if you are viewing them. As a result, I wasn't able to get the original HTML files, not that it matters.
It took me over three hours to retrieve all the files. I considered writing a script to do it, as it could also restore the files' original timestamps. However, it doesn't really matter.
It's interesting to compare then and now:
| | Then | Now |
|---|---|---|
| HTML | 3.2? (no DOCTYPE) | 4.01 strict |
| CSS | No | Yes |
| Layout | Table, no DIVs | DIVs |
| Image | 705 x 500 (3:2) | 800 x 600 (4:3) |
| Thumbnail | 126 x 83 to 185 x 122 (manual) | 267 x 200 (auto) |
| Processing | Watermark, USM (sharpened) | None |
| Quality | q6 (PhotoShop) | q85 (ACDSee) |
I don't really like the old images. They are over-sharpened (they looked okay on CRT monitors), are saved at too low a quality (q6, instead of the q8 I now use in PhotoShop), and have a weird color cast due to the way I scanned the images.
I may put some of the contents on my own domain, so that they can be viewed without the ads. I may even rescan the original negatives, since the original scanned images still have the color cast. This time, I'll leave it to the software!
My webhosting provider has a pretty sensible setup for their Apache server, so I never looked closely at the HTTP responses — until now.
I wanted to check whether the output of the Newest Blog Entries script is gzipped or not. To my pleasant surprise, it is!
But.. how come my JS and CSS files aren't?! This is important because jQuery 1.3.2 is 56 kB uncompressed, but only 19 kB compressed. Is mod_gzip enabled? Yes, it is; HTML files are gzipped.
While referring to the "reference" mod_gzip config file to enable gzip for JS/CSS files in .htaccess, I realized what was going on. My provider is using the same file to cater to Netscape Navigator 4! This browser does not handle gzipped JS/CSS files properly. But, when was the last time you saw this browser?
After I made my changes, I verified that JS/CSS files were indeed gzipped. Never assume.
VPN cuts off access to your home network. It makes sense from a security POV, but it is very frustrating if you need to access local resources (printers and shared folders).
It turns out my VPN SW merely manipulates the routing table to achieve this. (I believe it is the same for most VPN SW.)
Routing table before VPN:
| Network Destination | Netmask | Gateway | Interface | Metric |
|---|---|---|---|---|
| 192.168.1.0 | 255.255.255.0 | 192.168.1.1 | 192.168.1.102 | 75 |
| 192.168.1.102 | 255.255.255.255 | On-link | 192.168.1.102 | 81 |
A quick explanation: network/netmask identifies a subnet. Addresses of the form 192.168.1.x (x = 0 to 255) will go through 192.168.1.102 (which is my PC). Otherwise, they will be sent to the gateway. It goes without saying that 192.168.1.x is my home network.
The second line says that any traffic sent to 192.168.1.102 will be sent through On-link, which is my WiFi interface.
Routing table after VPN:
| Network Destination | Netmask | Gateway | Interface | Metric |
|---|---|---|---|---|
| 192.168.1.0 | 255.255.255.0 | 192.168.1.1 | 192.168.1.102 | 4500 |
| 192.168.1.0 | 255.255.255.0 | On-link | 10.100.100.100 | 30 |
| 192.168.1.102 | 255.255.255.255 | On-link | 192.168.1.102 | 4506 |
| 192.168.1.255 | 255.255.255.255 | On-link | 10.100.100.100 | 281 |
The VPN SW added the same entry, but to a different interface and with a much lower metric (cost). Guess which one the OS will use?
(Also, the fourth entry routes the local broadcast traffic to the VPN.)
I am unable to access the local resources:
| Test | Result |
|---|---|
| Ping 10.100.100.100 (VPN gateway) | Ok |
| Ping web-proxy (in company's network) | Ok |
| Ping 192.168.1.1 (router) | Timeout |
| Ping 192.168.1.99 (printer) | Timeout |
Let's make things right:
C:\> route delete 192.168.1.0 if 19
(`if 19` is the VPN's network interface.)
Routing table after deletion:
| Network Destination | Netmask | Gateway | Interface | Metric |
|---|---|---|---|---|
| 192.168.1.0 | 255.255.255.0 | 192.168.1.1 | 192.168.1.102 | 4500 |
| 192.168.1.102 | 255.255.255.255 | On-link | 192.168.1.102 | 4506 |
Now I am able to access the local resources:
| Test | Result |
|---|---|
| Ping 10.100.100.100 (VPN gateway) | Ok |
| Ping web-proxy (in company's network) | Ok |
| Ping 192.168.1.1 (router) | Ok! |
| Ping 192.168.1.99 (printer) | Ok! |
This works with two caveats:
I had over 200 emails in my Inbox even though I have a habit of categorizing them after reading. I spent almost an hour pruning the list down to 8 emails!
One of my resolutions is to keep the Inbox to 10 emails at most. It's like my physical in-tray (if I have one). I like to keep it neat.
I'm going to create two sub-folders, pending and deferred, to hold items that I don't have time to look into. (Is this cheating? It depends on how you look at it.)
I was so appalled by the size of my homepage that I decided to benchmark it against several websites.
Website | #CSS | #JS | #IMG | Others | Size (kB) |
---|---|---|---|---|---|
smallapple.net | 4 | 4 | 34 | 1 | 643 |
apple.com | 2 | 14 | 22 | 5 | 366 |
google.com | 0 | 1 | 2 | 3 | 24 |
hp.com | 1 | 9 | 20 | 4 | 859 |
yahoo.com | 0 | 20 | 27 | 2 | 167 |
(The counts may not be accurate.)
My website was 725 kB before I enabled gzip for JS and CSS files.
Yahoo delays loading some of their images. The full size is over 2 MB.
Opera also delays loading the images until they are displayed. That really helps my website a lot — the size is almost halved.
AJAX apps do not work well with the browser Back button. This is because AJAX apps use the same URL all the time, so they are "single-page" as far as the browser is concerned.
As much as I'd like to ignore it, the "Back" button isn't going to go away. I use 3 different browsers and guess what buttons I have on my toolbar? Back, Forward, Reload and Bookmarks — nothing else, not even the menu.
The good news is, it is no longer difficult to support the browser Back button. The basic technique is to record each state change in `location.hash` (so that the browser creates a history entry) and to poll the hash to detect when the user navigates with Back or Forward.
Note that as long as the user remains in our AJAX app, it is our poller that loads the new page, not the browser. The polling frequency doesn't really have to be very high. 100ms to 250ms should be fast enough.
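A minimal sketch of such a poller — my own illustration of the technique, not code from this site; `loadPage()` and the state encoding are assumptions:

```js
var currentHash = location.hash;

// When the app changes state, record it in the URL so the browser
// gets a new history entry.
function goTo(state) {
  currentHash = "#" + encodeURIComponent(state);
  location.hash = currentHash;
  loadPage(state);                      // hypothetical: render the new state
}

// Poll the hash; if it changed behind our back, the user pressed
// Back or Forward, so restore that state ourselves.
setInterval(function () {
  if (location.hash !== currentHash) {
    currentHash = location.hash;
    loadPage(decodeURIComponent(currentHash.slice(1)));
  }
}, 250);
```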
This technique works like a charm in IE 8, Firefox 2+, Opera 9.5+, Safari 3.2 and Chrome 2.0. Although this behaviour was originally a side-effect, it is so important to AJAX apps that it is likely to be supported by future browsers.
Notice the lack of IE 6 and IE 7? IE refuses to create a new history entry. Luckily, people have perfected a convoluted workaround over the years. The basic idea is that loading an `iframe` will cause IE to create a new history entry.
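Roughly like this — a hedged sketch of the idea only; real implementations do considerably more bookkeeping, and `history.html` here is an assumed helper page:

```js
// Sketch: make IE 6/7 record a history entry by (re)loading a hidden iframe.
var frame = document.createElement("iframe");
frame.style.display = "none";
document.body.appendChild(frame);

function pushStateForIE(state) {
  // Each distinct URL loaded into the iframe becomes a history entry in IE.
  frame.src = "history.html?state=" + encodeURIComponent(state);
}
```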
The only problem with the IE workaround is that my homepage no longer validates as HTML 4.01 strict.
To keep it as 4.01 strict and fail validation, or to convert to 4.01 loose and pass? (You'll know the answer if you click on the W3C button.)
Image preloading techniques are a dime a dozen, but image deferring techniques are much rarer.
It is very simple:
$("<img>").attr("src", fname);
Even the DOM way is simple:
var obj = document.createElement("img"); obj.setAttribute("src", fname);
Deferring is not as popular as preloading, but it is essential for a multi-tab page. You just need to load the images for the active tab.
Deferring requires some trickery. First, we remove the src attribute so that the browser will not load it.
$("img").each(function() { $(this).data("org_src", $(this).attr("src")); $(this).removeAttr("src"); });
Later, at the appropriate time, we restore it:
$("img").each(function() { $(this).attr("src", $(this).data("org_src")); });
jQuery makes it simple due to its data()
facility.
Anyway, this is the theory. It doesn't work for me in IE 7 and FF 3 with jQuery 1.3.2: by the time the code is called in `$(document).ready()`, the images are already loaded.
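One way around that, which is not what this post does (the `data-src` attribute and class name below are my own assumptions), is to keep the real URL out of `src` in the markup to begin with, so the browser never has a chance to load it:

```js
// Markup (assumed): <img class="deferred" data-src="picture.jpg" alt="...">
// Later, when the tab becomes visible, copy the URL into src.
function loadDeferredImages(container) {
  $(container).find("img.deferred").each(function () {
    $(this).attr("src", $(this).attr("data-src"));
  });
}
```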
FEWER people are taking cabs these days as more hop on buses and trains, or cut back on travel expenses.
On average, there were 893,674 taxi trips made a day in the first three months of the year, the lowest in five years, after recovering steadily from a post-Sars low. This is about 8,805 fewer rides a day than the same quarter last year, and down another 74,673 rides from 2007.
For commuters, this means fewer complaints about waiting times at taxi stands.
A Land Transport Authority (LTA) survey of 30 taxi stands in the city found that, year-on-year, January waiting times were shorter, while February's were largely unchanged.
For taxi drivers, the news was not so good.
As expected, the dip in ridership had hit the pockets of cabbies. Drivers were earning about 10 per cent less, said ComfortDelGro, the largest of six taxi companies here, with 15,000 of the 24,000 cabs on the roads.
After deducting rental and fuel costs, a taxi shared by two drivers working in shifts collected $166 a day, going by the meter readings of 8,000 Comfort cabs monitored over the first five months of the year.
This worked out to an average of $83 a driver a day, compared with $94 from a similar survey in April last year.
Call bookings - a more lucrative source of income for drivers than cruising for passengers along the streets - were also down, said a company spokesman.
Cabbies such as Mr Yeow Khee Beng were feeling the pinch.
The 62-year-old, who has been driving a taxi for more than 20 years, said he pocketed about $30 to $40 a day this year after paying his rental and his diesel charges, which came up to $80.
'Last year, I could get $50 to $60 a day. Nowadays, you see fewer people waiting on the road for a taxi or even in the taxi queues. It is harder,' he said.
The acting adviser to the Taxi Operators' Association (TOA), Aljunied GRC MP Yeo Guat Kwang, attributed the dip to a combination of Singaporeans tightening their belt, better public transport and fewer tourists.
Legal executive Nadia Tan, 35, is among those who are giving taxis a miss these days.
She no longer takes cabs home after work because the higher surcharges - a 35 per cent peak hour surcharge and a $3 city surcharge - are 'just too crazy'.
Instead, she tries to leave her Shenton Way office slightly earlier to beat the evening rush on the MRT.
Commuters like her have made public transport more popular. For the first quarter this year, 4.87 million rides were made a day on buses and trains, compared with last year's 4.78 million.
Mr Yeo said: 'The opening of the Circle Line, more premium buses...it is definitely hitting us.'
But the TOA, which represents the taxi drivers' associations, said it was not overly concerned about the dip in numbers at the moment.
Currently, most drivers tell Mr Yeo they earn about $5 less a day, which is one or two fewer trips a day.
Helping to cushion the impact, about $13 million in rebates from the Government's tax relief measures that taxi companies received have been channelled back to the drivers.
The worry is whether the downward trend will continue, said Mr Yeo.
The association is working together with the taxi companies and the LTA to see how they can help, he said.
For a start, the LTA is providing $1 million to help promote taxi ridership.
This money, to be matched by the companies, will be used to hold promotions, possibly discounts and tie-ups with shopping centres and tourist attractions from next month.
The TOA, on its own, will target car owners. Mr Yeo said: 'We want them to give up their cars for a cheaper, but just as good, taxi ride.'
Also from next month, the association will hold training sessions for drivers at motor workshops so that they can upgrade their skills while waiting for their taxis during monthly mandatory checks.
Cabbies will also get a $10 allowance for attending these one- to two-hour sessions.
Commuters must feel like they are getting better service standards if they are going to pay more for a taxi ride, said Mr Yeo.
Time to reduce the surcharges.
I was at Bugis on Saturday evening and I was surprised that there was no (human) queue at the taxi stand!
It is not uncommon to see people leaving objects such as plastic chairs to reserve parking lots for their cars.
But a reader seems to have encountered an entirely new way of reserving parking lots - using the body.
The reader told Stomp she encountered a woman who tried to reserve a parking lot at a public carpark by standing in the lot with her mother. The woman daringly refused to move while the reader was backing into the lot.
When the reader asked her where her car was, she replied that it was "on the way".
The reader said, "Since her car was nowhere in sight and not even in the line of cars being held up behind me, I continued backing into the lot."
The reader said another female driver who witnessed the incident intervened by telling the woman there were more empty lots further up.
The woman's mother then moved out of the way, but the woman stayed put and started banging on the reader's car boot.
It was at this point the reader decided to call the police.
Aware that the police could do nothing about it, the reader said the police report was a preemptive measure in case something happened to her or the car.
The reader added that she did not want these people thinking that they can get away with ganging up on a lone driver.
I hardly think this is a new idea. It's not common in Singapore, but I've seen it done before. When parking is tight, you send "scouts" from your car to seek and "chop" the space.
The turn signals on my YBR stopped working again. Okay, I moved the flasher unit when I removed the battery for charging, but it shouldn't have any effect, right?
Well, it turns out that the flasher unit must be level for it to work properly!
Because I put a few months' worth of entries in a single HTML file, it can grow to over 100 kB. It is better to serve just the newest entries: it downloads faster and the user is not presented with a long list of entries, most of which he is not interested in.
I wanted to write a "state-of-the-art" AJAX app to do it, but this requires PHP support on the server to avoid downloading the whole HTML file (which would defeat the purpose). I did not proceed because the entries would no longer be in `index.html` and would not be indexed by search engines.
Then, I googled and found that PHP 5 supports HTML parsing. That's great!
I whipped up `index.php`, which parses the HTML file and presents just the newest entries. The resultant served HTML file is in the neighbourhood of 10 kB. That's 10x smaller!
The PHP script even caches the data so that it does not have to parse through the HTML file all the time, only when it changes. It is quite easy to implement caching in PHP, so there is no excuse not to do it. Just save everything to a file and compare the files' last-modified timestamp.
I put in some special handling: the `Last-Modified` and `If-Modified-Since` headers must be handled manually.

Note: the Entries on the right are "broken" in the sense that they are no longer permanent links. Oh well, change always comes with side-effects. I'll leave it as-is for now.
I buy Inuyasha out of habit. I actually lost interest in it quite a while back. The plot is too simple. The master villain spawns off an underling, which our heroes spend some time overcoming. A single fight can take up half a volume or more. When the underling is killed, repeat for the next one. Meanwhile, the master villain is invincible, but is nonetheless easily subdued in the last volume. This goes to show it is all arbitrary. (I read the author was tired of this manga, hence she put an end to it.)
I have more incomplete manga series than completed ones. I lost interest in them midway, because the story got dull or changed direction.
I actually have one rule when it comes to buying manga: buy completed series only. However, I never really follow it.
Let's test modifiers. We define:
    .green { color: green; border: green 1px solid; }
    .blue { color: blue; border: blue 1px solid; }
    span.magenta { color: magenta; border: magenta 1px solid; }
    .widget { color: red; }
    .widget.blue { color: blue; }
(The inline demos are lost here: the first three cases come out correct, but the last one — a widget with the green modifier — is not green!)
This case is tricky. The reason `green` is applied first is because it is defined first in the source code. If I put `green` last, the widget is green! This applies to rules at the same specificity level.
In any case, we cannot use global modifiers most of the time because they are applied before the more specific rules. (Unless you make them important, which I prefer not to.)
Suppose we want small, normal and large widgets. It's easy to define three classes: `small-widget`, `widget` and `large-widget`. However, I want to define just one main class, `widget`, and use modifiers to get small and large widgets.
    /* Normal widgets */
    div.widget { color: red; }

    /* Small widgets only */
    div.widget.small { color: blue; }
It's easy to use:
<div class="widget small"> ... </div>
I used to have problems with this technique in the past because I tried to use,
.small { color: blue; }
and it doesn't work! (Color remains red.)
But I think I have figured out why. CSS is stacked from the most general to the most specific:
Simplest way to flush a set of buttons to the right:
    <div style="background-color: #eee; overflow: auto;">
      <input type="button" value="Btn B" style="float: right;">
      <input type="button" value="Btn A" style="float: right;">
      <div class="clear"></div>
    </div>
(IE 7 does not display it properly. I don't know which IE quirk I triggered.)
Notice that we have to define Btn B before Btn A in the code! I find it confusing. I prefer to do this:
    <div style="background-color: #eee; overflow: auto;">
      <div style="float: right;">
        <input type="button" value="Btn A" style="float: left;">
        <input type="button" value="Btn B" style="float: left;">
      </div>
      <div class="clear"></div>
    </div>
However, I don't know if it's valid or not because floated elements are supposed to have specified widths.
After I mentioned to the users of my machine that I have implemented a periodic checker to lower the priority of compilation processes so that the other processes remain responsive, a user took up the challenge to show that my effort is futile.
He launched 50 (AV) sweeps simultaneously. It did slow the machine down — the load was 20+ — but foreground processes still remained responsive. (I added `sweep` to the list of programs to monitor.)
The next day, I implemented a more general-purpose checker that uses the ratio of CPU time to elapsed time.
0.3% may seem very low, but only a handful of processes (out of 300+) exceed it.
I didn't use the cumulative cpu time (including child processes), but I will modify the script to use it.
(And when I do so, I'll need to exclude the shells — otherwise any program the user started will be niced!)
At the PC show, I saw 32" LCD TVs going for $700, 37" going for $900 and 42" going for $1,100. It's obvious where the sweet spot is.
Seized US bonds are worth US$ 134.5 billion. The whole affair touches a number of economic and political issues. For some the resignation of Japan's Interior minister might be related to it.
There have been new developments with regards to the story of US$ 134.5 billion in US government bonds seized by Italy's financial police at Ponte Chiasso on the Italian-Swiss border, which AsiaNews reported four days ago. News about it initially made it to the front page of many Italian papers, but not of the international press. Since yesterday though, some reports have been published by English-language news agencies. And some commentators are starting to link the story to reports in the US press dating back to 30 March.
On that date the US Treasury Department announced that it had about US$ 134.5 billion left in its financial-rescue fund, the Troubled Asset Relief Program (TARP), whose purpose is to purchase assets and equity to buttress companies in trouble. The existence of such funds means that the Obama administration may not have to go to Congress for additional money, something which is especially important since many lawmakers have vowed to oppose any requests for more funds.
At the same time, Japan's Kyodo news agency has reported that the resignation of Japan's Interior Minister Kunio Hatoyama might also be related to the Ponte Chiasso affair. Officially the minister quit as a result of a row over who should head the state-owned Japan Post, but some sources have suggested that such a scenario is not very plausible since Mr Hatoyama was Prime Minister Taro Aso's main ally in his rise to the prime minister's office, and is especially unconvincing since the ruling coalition government has to face elections in just two weeks time. Indeed there are many reasons to connect the Ponte Chiasso incident to the minister's resignation.
First of all, the men carrying the bonds had Japanese passports. Secondly, they were not arrested. Under Italian law anyone in possession of counterfeit cash or bonds worth more than a few tens of thousands of euros must be arrested. By comparison the value of the seized counterfeit bonds is equal to 1 per cent of the US Gross Domestic Product (GDP). Thirdly, how the seizure took place is worthy of a Monty Python movie — two well-dressed Japanese men carrying a briefcase travelling in a local train usually used by Italian manual labourers who commute to Switzerland for work had as much chance to go unobserved as two European businessmen travelling in the Congo.
For AsiaNews the incident raises several questions. For example, why did Italy's press, of every stripe, first give the matter great visibility, only to drop it as quickly? Also, if we are to assume that the bonds are real, why were they in Italy on their way to Switzerland? If these were the unused TARP funds why would they be in US Federal Reserve denomination? Would it not have been better to wait to see how they would be used before the bonds were issued? If they are authentic and owned by a foreign state, why were they not transported in a diplomatic bag, which cannot be inspected at customs? And what will the Italian government do insofar as the issue represents an offence under Italian law? Will it impose a fine of 38 billion euros, and run the risk of a row with an ally, or return the money without any penalty to the rightful owner and show the world that Italy is some kind of banana republic, a semi-colonial protectorate that violates its own laws and constitution?
Whatever the case may be, for Italy's Prime Minister Silvio Berlusconi it is a heavy burden to bear, given the legal and criminal consequences he might face.
The only people who come out of it well are Italy's tax cops, reason for them to show off their success on their website.
Reality is stranger than fiction sometimes. If you made this up, no one would believe you!
I think the simplest explanation is that these are fake. But at times like this, you never know.
(Note that even if the notes are real, the US (SEC?) is likely to declare them as fake to avoid any hassle and just issue new ones to the original owner.)
The first two matches on Google: JavaScript Lint and JSLint.
JSLint does not allow single statements without braces — and there is no option to turn it off. This gives a warning:
if(a) b();
Strict is good, but this goes into pedantic territory. I remember I once turned on the pedantic option for a C compiler and it disallowed `//` comments. I never used the option again. (Nonetheless, after that experience, I did not use `//` again in C code.)
JavaScript Lint also gives this warning, but it can be turned off. With it off, I can look at the other warnings without having to wade through a ton of senseless ones.
The most common warning is of course this:
lint warning: comparisons against null, 0, true, false, or an empty string allowing implicit type conversion (use === or !==)
Some are very trivial to fix. Instead of
if(list.length == 0) ...
I just change it to
if(list.length === 0) ...
The most problematic is when I need to test for `null`:

    if(id != null) ...

If it is a return result and the API states whether `null` or `undefined` is returned, then I check for that directly. I don't want to change the code to

    if(id !== null && id !== undefined) ...

when both are allowed.
I get these two warnings:
    lint warning: unreachable code
    warning: function fn does not always return a value
because I place my inner functions last:
    function fn() {
      ...
      inner_fn();
      var x = ...;
      ...
      return x;

      function inner_fn() {
        ...
      }
    }
I just changed to this style!
I get this warning
warning: redeclaration of var i
for for
loops:
for(var i = 0; i < list.length; ++i) ... ... for(var i = 0; i < list2.length; ++i) ...
This is the only time I allow variables to be redeclared.
I get this warning
lint warning: unexpected end of line; it is ambiguous whether these lines are part of the same statement
when I do this:
$("#id") .addClass("class") .show();
Nope, not going to change.
I looked through JavaScript Lint's options and I'm sure I will hit these false warnings:
    increment (++) and decrement (--) operators used as part of greater statement
    the default case is not at the end of the switch statement
There's no reason for default
to be at the end. If it is
standalone, I put it at the end. If it's combined with another case, then
it probably won't be at the end.
Linting is a good idea. It already caught two bugs even in my short Javascript code. I omitted a `var` for a variable declaration, so it became a global variable. In the other case, I renamed a function, but forgot to rename some of the callers. Oops.
SQL is very prone to this, because query strings are usually constructed straight from the user input:
    $query = "SELECT * FROM user_tbl WHERE name = '$name' AND pw = '$pw'";
What if a hacker enters his password as `' OR ''='`? The query string becomes
    $query = "SELECT * FROM user_tbl WHERE name = '' AND pw = '' OR ''=''";
Because `AND` has higher precedence than `OR`, the `WHERE` clause becomes a true-expression and the query will select all the rows.
In SQL, you have to escape all strings by calling `mysql_real_escape_string()`. It escapes all the special characters to make them normal characters.
For numbers, just use intval()
to make sure they are numbers.
Invalid inputs are converted to 0.
I try to watch out for this, but it is very error-prone because I have to watch out for every SQL query.
HTML is also very prone to this, especially when using
innerHTML
.
Suppose we want to display a user input for confirmation:
var usr_inp = $("#user_inp").val(); $("#confirm_msg").html(usr_inp);
What if the user enters `<b>test</b>`? We will display a bold "test". To display the literal text `<b>test</b>`, we need an `entityify` function:
    var usr_inp = $("#user_inp").val();
    $("#confirm_msg").html(entityify(usr_inp));
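As far as I know, jQuery does not ship an `entityify()`; a minimal version could look like this (a sketch only, covering just the usual special characters):

```js
// Sketch: escape the characters that are special in HTML.
function entityify(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}
```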
Again, this is very error-prone because you have to watch out for every use of `innerHTML`.
(Note that $.text()
is safe, but people seldom use it.)
Both JSON and XML require special characters to be escaped. JSON requires only " (and \) to be escaped, because all the data are in strings and only these are special within strings. XML requires < > & to be entity-encoded. It is also usual to encode ' and ".
This is easier to implement because it can be done at the data layer. The rest of the App does not have to know about it.
The fundamental reason why injection attacks work is because we construct the whole command as a string and allow seemingly normal characters to have special meaning.
If this function exists, it would stop most injection attacks:
sql_printf_query('SELECT * FROM user_tbl WHERE name = %s AND pw = %s;', $name, $pw);
Windows tells me I am not able to disconnect my portable HD because a program is still accessing it. That's good. But why doesn't it tell me which program so that I can close it?
It happens to me so often that it cannot be due to an app that I ran. I can close every app and Windows still gives me this error message. That's frustrating.
Now, I just close the obvious apps (that I used to access the drive), kill Explorer and try again. If it doesn't work, I just disconnect the HD anyway.
I know there is a Process Explorer utility that can show which process is holding a handle to the drive. Let's hope it works on Vista.
`$.extend()` is useful, but we cannot use it if we want to modify the original object. I created a new function, `$.fillIn(obj, toFillInObj)`, that does this. It will take values from `toFillInObj` if they are absent from `obj`. It only copies enumerable properties and will deep-copy objects and arrays.
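The post does not show the implementation; a minimal sketch of what such a function could look like (not the actual code) is:

```js
// Sketch: copy properties from toFillInObj into obj only where obj
// does not already have them; deep-copy objects and arrays.
$.fillIn = function (obj, toFillInObj) {
  for (var key in toFillInObj) {              // enumerable properties only
    var val = toFillInObj[key];
    if (obj[key] === undefined) {
      if (val !== null && typeof val === "object") {
        // Deep copy so obj never shares structure with toFillInObj.
        obj[key] = $.extend(true, $.isArray(val) ? [] : {}, val);
      } else {
        obj[key] = val;
      }
    }
  }
  return obj;                                 // obj itself is modified
};
```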
It is used like this:
$.fillIn(opts, defaultOpts);
Now this works properly:
    function fn(opts) {
      $.fillIn(opts, { v: 2 });
      alert(opts.v);  // displays 1
      ...
      opts.v = 3;
    }

    var opts = { v: 1 };
    fn(opts);
    alert(opts.v);  // what is displayed?
var arg = arg || default_value;
Very useful for optional parameters.
cmd && cmd();
Very useful for optional callbacks.
opts = $.extend(default_opts, opts);
opts
will use the values from default_opts
unless they are defined in opts
. With this, there is almost no
need to check for nulls for optional parameters. Very nice.
I found a gotcha for $.extend()
almost immediately after
using it.
    function fn(opts) {
      opts = $.extend({ v: 2 }, opts);
      alert(opts.v);  // displays 1
      ...
      opts.v = 3;
    }

    var opts = { v: 1 };
    fn(opts);
    alert(opts.v);  // what is displayed?
The bane of all Windows developers: a new Windows version. Worse, it's pre-release and you have to find an up-to-date PC to install it on — effectively wasting it — and you can't spare any.
Only my notebook is sufficiently up-to-date to install Windows 7. (It is my only computer that can fully support Vista.) I don't want to waste it; I need it for my work!
What I'm going to do is to swap in my old 80GB backup hard disk and install Windows 7 on it.
In preparation for this, I have copied the contents to my 1 TB Master Repo hard disk. It struck me that 1 GB is like 100 MB in the past. 100 MB used to be medium size. Now 1 GB is considered medium size.
Updated (11/6): I'm not able to do so because I just found that the notebook uses 1.8" HD!
I will make a minor change in my HTML coding. From
<link rel="stylesheet" href="style.css" type="text/css"> <script type="text/javascript" src="script.js"></script>
to
<link href="style.css" rel="stylesheet" type="text/css"> <script src="script.js" type="text/javascript"></script>
I'm already putting the id and src attributes first for other HTML elements, so this is just to make the syntax consistent:
<img src="picture.jpg" alt="desc"> <input id="signin_btn" type="button" value="Sign In">
Javascript does its lookup at the point of execution, not parsing. What does it mean?
function fn() { var a = b; } var b = 1; fn();
This works because b
has been defined when fn()
is called. Now, this doesn't work:
var a = b; var b = 1;
a is undefined because b has not been defined at that point.
C-like languages use all three kinds of bracketing: ( ) (parenthesis), [ ] (brackets) and { } (braces): ( ) is very general-purpose, [ ] is used for arrays and { } is used for compound statements. C++ and Java even have a fourth kind, < > (angle brackets), for templates.
(Note that regular expressions use / / (slash) for bracketing, but this is very specific.)
There should only be two: ( ) and { }. We should be able to do away with [ ] and < >.
Why is an array so special that it uses a different bracket:
list[i]
instead of list(i)
? If we think of an array
as a mapping function, then it should use the ( ) notation.
I now prefer to use a single type of brackets because it's easier to change implementations. With two types of brackets, once a variable is defined to be an array, it is difficult to change it to something else. You can let the compiler flag all the errors and fix them, but you cannot do so reliably for an interpreted script.
C++ allows you to overload the [ ] operators, so you can switch implementations while still using the array syntax. However, it is not truly transparent unless you are willing to add a lot of complexity. And then it may break unexpectedly. I won't go there.
Just to note, BASIC uses ( ) for arrays and at that time, I didn't like it because I couldn't easily differentiate between arrays and functions. Now I think it's a virtue.
C-like languages always require a function call to include the ( ).
fn
refers to the function, fn()
calls it.
Pascal and BASIC require ( ) only for functions with arguments. I like it because the change is transparent when a variable is changed to a function.
In C, as a variable:
a = b + c;
If c
becomes too complex and must be computed:
int c() { return ...; } a = b + c();
In Pascal and BASIC, the expression remains unchanged.
In C, there are only functions, no procedures. What are procedures? They are simply functions that do not have a return value. Unfortunately, there are real differences in Pascal and BASIC.
To call a function in Pascal:
a = fn(x, y, z);
You cannot ignore the return value. This is a syntax error:
fn(x, y, z);
IIRC, some Pascal implementations allow this (via a non-standard setting).
To call a procedure in BASIC:
fn x, y, z
Notice the parenthesis is omitted. This must be on the what-not-to-do list in programming language design because it is very difficult to change a procedure to a function and vice-versa.
It is considered a good programming practice to have a single entry and exit point. A function can only have one entry point, but can have multiple exit points. I somewhat agree with the single entry point (but see later), but disagree with the single exit point.
I chanced upon this code that provides a clear illustration. (I have modified the indentation and spacing to my own style.)
    function typeOf(value) {
      var s = typeof value;
      if(s === 'object') {
        if(value) {
          if(typeof value.length === 'number' &&
             !value.propertyIsEnumerable('length') &&
             typeof value.splice === 'function') {
            s = 'array';
          }
        }
        else {
          s = 'null';
        }
      }
      return s;
    }
One exit point, remember? I would do it this way:
    function typeOf(value) {
      var s = typeof value;
      if(s !== 'object') return s;
      if(!value) return 'null';
      if(typeof value.length === 'number' &&
         !value.propertyIsEnumerable('length') &&
         typeof value.splice === 'function') return 'array';
      return s;
    }
More obvious, don't you think?
For more complicated code, it is usual to do some post-processing before returning. In this case, we cannot have multiple exit paths, or can we?
    function fn() {
      do {
        ...
        if(cond_a) break;
        ...
        if(cond_b) break;
        ...
      } while(false);

      // do post-processing
    }
We just use a fake do-while loop to allow us to break out at any time. There is a gotcha with a fake do-while loop: it does not allow continuing. This fails:
    do {
      ...
      if(cond_a) break;
      ...
      if(cond_b) {
        // fix it and try again
        continue;
      }
      ...
    } while(false);
It does not work because continue
tests the loop condition.
If you need to continue, you need a real while loop:
    while(true) {
      ...
      if(cond_a) break;
      ...
      if(cond_b) {
        // fix it and try again
        continue;
      }
      ...
      break;
    }
Just remember the final break
!
It may not be obvious, but overloading can be viewed as multiple entry points. After all, the functions just massage the parameters a little and then call the real workhorse function.
Javascript allows overloading if you use the arguments
array instead of the declared parameters:
    function fn() {
      if(arguments.length == 1) {
        fn_a(arguments[0]);
      }
      else if(arguments.length == 2) {
        fn_b(arguments[0], arguments[1]);
      }
      else {
        ...
      }
    }
I like function overloading because I think it simplifies the module's interface.
All Javascript functions have an implicit this
variable. What
does it point to? Good question.
For a free-standing function, this
always points to
the "global object":
function fn_a() { // this points to the global obj }
For a method, this
points to the object:
var obj = { fn_b: function() { // this points to obj } }; obj.fn_b();
The confusing part is, if `obj.fn_b()` calls `fn_a()`, where does `this` point to?
Big gotcha moment: it points to the global object. This happens even for inner functions, which is actually the confusing part:
    var obj = {
      fn_b: function() {
        // this points to obj
        ...
        fn_c();

        function fn_c() {
          // this points to the global obj
        }
      }
    };

    obj.fn_b();
There are two ways to work around it:
    var obj = {
      fn_b: function() {
        var me = this;  // use me instead of this to avoid confusion
        ...
        fn_c();

        function fn_c() {
          // use me instead of this
        }
      }
    };

    obj.fn_b();
The other way:
    var obj = {
      fn_b: function() {
        // this points to obj
        ...
        fn_c.call(this);

        function fn_c() {
          // this points to obj
        }
      }
    };

    obj.fn_b();
A big gotcha for programmers who come from C-like languages: Javascript does not have block-level scoping.
    function fn() {
      var a = 1;
      if(...) {
        var a = 2;
        ...
      }
      else {
        var a = 3;
        ...
      }
      alert(a);
    }
a
is either 2 or 3.
This is why I always declare a variable at the outermost block that it is used and then never declare it again in an inner block. If you redeclare it, other people may be misled into thinking that there is block-level scoping. Make the code look correct. No one is confused over this:
    function fn() {
      var a = 1;
      if(...) {
        a = 2;
        ...
      }
      else {
        a = 3;
        ...
      }
      alert(a);
    }
Just to test your understanding even more:
    var x = 1;

    function fn() {
      alert(x);  // what is displayed?
      var x = 2;
      alert(x);  // what is displayed?
    }

    fn();
Yes, this is a trick question.
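My reading of it (the post leaves the answer out): the local `var x` declaration applies to the whole function body, so it shadows the global `x` even before the assignment runs. `fn()` behaves as if it were written like this, so the first alert shows `undefined` and the second shows 2:

```js
function fn() {
  var x;         // the hoisted declaration shadows the global x
  alert(x);      // undefined
  x = 2;
  alert(x);      // 2
}
```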
Javascript is one strange beast. The syntax looks like C, but it is more functional than procedural. Well, you can use it as a procedural language, but you are missing 2/3 of the power.
Here, I'm just going to show one of the functional powers: how to use closure to define private state variables for a function. State meaning that the values are remembered across function calls.
(It is a good thing Javascript calls it a closure. If it were called a lambda, people's eyes would glaze over.)
We already know how to define helper functions by using inner functions:
    function fn(...) {
      ...
      hlp_a();
      ...
      hlp_b();

      function hlp_a() { ... }
      function hlp_b() { ... }
    }
This is good because it reduces namespace pollution.
Inner functions seem natural, but the most notable exception is C. Apparently it is not so straightforward to implement nested functions for static-typed languages.
There is a special kind of closure in Javascript:
(function() { ... }) ();
The original intent of this construct is to define a literal function and execute it immediately. I don't know who discovered it can be used to define a closure, but it's very neat.
Even if we don't understand how it works, most of us use it today to reduce namespace pollution: everything defined in a closure is private to it.
Traditionally, the only way to define state variables is outside of the function:
    var state_a = ...;
    var state_b = ...;

    function fn(...) {
      state_a = ...
      state_b = ...
    }
If no other functions use the state variables, we should put them in the function. But how?
Before we continue, the reader should know these two are equivalent (well, almost):
function fn() { ... }
and,
var fn = function() { ... };
Let's see what we can do with a closure:
    var fn = (function() {
      var state_a = ...;
      var state_b = ...;

      return function(...) {
        state_a = ...
        state_b = ...
      };
    }) ();
Note how the closure returns a function to be assigned to `fn`. That's our real function. We are also able to define as many state variables as we want in the closure. The closure will exist as long as it is bound to `fn`, so all the variables defined in it will exist too. Neat!
I have known this trick for some time, but I always resist using it for one reason or another. No longer. Now I think it helps to keep related data in one place, so it is a very useful technique.
People seldom do this, for no better reason than it being very verbose,
for(var i = 0; i < obj.list.length; ++i) { obj.list[i].item1 ... obj.list[i].item2 ... obj.list[i].item3 ... }
It is usual to define an alias to deal with array elements:
for(var i = 0; i < obj.list.length; ++i) { var elm = obj.list[i]; elm.item1 ... elm.item2 ... elm.item3 ... }
Any changes to elm will affect the original array. Everything goes well, until you need to remove elm from the list:
for(var i = 0; i < obj.list.length; ++i) { var elm = obj.list[i]; ... elm = null; }
obj.list[i] is not set to null.
This is not a Javascript problem. I have stepped on this bug in C/C++ too. It will happen whenever we use an alias.
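To actually remove (or clear) the element, you have to go through the array itself rather than the alias — a sketch, where `shouldRemove()` is a made-up predicate:

```js
// Iterate backwards so splice() does not skip the element after a removal.
for (var i = obj.list.length - 1; i >= 0; --i) {
  var elm = obj.list[i];
  if (shouldRemove(elm)) {
    obj.list.splice(i, 1);      // really removes it from obj.list
    // or: obj.list[i] = null;  // clears the slot but keeps the length
  }
}
```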
Take this test to see if you truly understand what is going on:
    var list_a = [ 1, 2, 3 ];
    var list_b = list_a;

    list_a[0] = 101;
    alert(list_b[0]);  // what is the value?

    list_b[0] = 1001;
    alert(list_a[0]);  // what is the value?

    list_a = [ 11, 12, 13 ];
    alert(list_b[0]);  // what is the value?

    list_b[0] = 1;
    alert(list_a[0]);  // what is the value?
A mental block for programmers who come from a strongly-typed language to a typeless one is that they assume a variable can have only one type for its lifetime: once a string, always a string.
This is not true.
Most people force function parameters to have a specific type. This is a good practice, but we have to be flexible sometimes.
Suppose we want to manipulate either a single element or a list. In C, we have to write two functions:
void process_one(elm_t elm); void process_list(elm_t list[], int size);
In C++, Java and other languages that support overloading, we can use one name, but we must still write two functions:
void process(elm_t elm); void process(list_t<elm_t> &list);
In Javascript, we can write just one function:
    function process(list) {
      if(!$.isArray(list)) list = [ list ];
      for(var i = 0; i < list.length; ++i) {
        ...
      }
    }
We make the input an array if it's not. This allows us to simplify our interface.
Most programming languages allow only one return value. However, we usually need to return two values: one to indicate if the function is successful and if so, the actual result.
In most cases, we are able to return a special value to indicate the function has failed. For example:
char *process(...);
If process()
fails, it returns NULL, which is a special
value.
What if the return value is always valid, how do we indicate an error then? That's when the code starts to look unnatural:
bool process(char *out_str, ...);
In a typeless language, we just return the result, or false
if it failed:
    function process(...) {
      ...
      if(some_err) return false;
      ...
      return result;
    }

    var result;
    if((result = process(...)) !== false) {
      handle(result);
    }
Note that we have to use the === or !== operators to avoid type coercion.
Most modern languages also allow you to return multiple values — effectively.
In Javascript, because it's so easy to create a literal object/array, we just do this:
    function process(...) {
      ...
      return [ err, out_str ];
    }

    var result = process(...);
    if(result[0] == 0) {
      handle(result[1]);
    }
PHP supports literal return list, so it looks neater:
function process(...) { ... return array($err, $out_str); } list($err, $result) = process(...); if($err == 0) { handle($result); }
After using mixed types for a while, it makes you wonder if strongly-typed languages can be more flexible so that we don't have to write so much boilerplate code to work with/around the types.
Java is the king of strongly-typed languages: it enforces types very strictly. Pascal was the old king (it was as strict as Java), but no one uses Pascal anymore.
Suppose we have an array [ 1, 2, 3 ] that we want to emit as "1, 2, 3".
How do we write the function? (Let's ignore `Array.join()` for illustration purposes.)
    function emit(list) {
      var s = "";
      for(var i = 0; i < list.length; ++i) {
        s += list[i] + ", ";
      }
      return s;
    }
Oops, we get "1, 2, 3," instead.
This is such a common problem with list generators that many programming languages allow a trailing comma for their lists, so [ 1, 2, 3, ] is valid.
Strangely, IE does not allow it and will ignore the entire Javascript file silently. This is a common problem for programmers who use FireFox as their primary development browser.
Even if the programming language allows it, I prefer not to have the trailing comma, for aesthetic reasons, so the code looks like this:
    function emit(list) {
      var s = "";
      for(var i = 0; i < list.length; ++i) {
        s += list[i];
        if(i < list.length - 1) s += ", ";
      }
      return s;
    }
It will output "1, 2, 3".
Problem solved? For lists with known lengths, yes. What if we are parsing a file, say an XML file? We want to convert these,
<items> <item>1</item> <item>2</item> <item>3</item> </items>
to [ 1, 2, 3 ]. The problem is, we don't know when
</items>
will show up.
The solution is simple: we emit the separator first.
    function emit(xml_list) {
      var s = "";
      var emit_sep = false;
      while(true) {
        var item = get_next_item(xml_list);
        if(!item) break;
        if(emit_sep) s += ", ";
        s += item;
        emit_sep = true;
      }
      return s;
    }
To handle all cases consistently:
    function emit(list) {
      var s = "";
      for(var i = 0; i < list.length; ++i) {
        if(i > 0) s += ", ";
        s += list[i];
      }
      return s;
    }
This is a mental block. Because the comma appears behind the element, most people think of generating it after the element. No — we can generate it before the element (for the previous element).
What does this output?
    function x() { alert("1"); }

    function a() {
      x();

      function x() { alert("101"); }
    }

    a();
Is it 1 or 101?
The answer is 101, because Javascript does the lookup dynamically. If you understand that, the next one is a piece of cake:
    function x() { alert("1"); }
    function y() { alert("2"); }

    function a() {
      x();

      function x() {
        alert("101");
        y();
      }

      function y() { alert("102"); }
    }

    a();
For a long time, I put functions before they are used:
function a() { function x() { alert("101"); } x(); }
I will now move them to the back so that the code will be easier to read.
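Concretely, the earlier example would now be laid out like this — the call up front, the inner function at the back (it still works, as shown above, because the inner function is found when `x()` is actually called):

```js
function a() {
  x();                          // call up front

  function x() {                // definition moved to the back
    alert("101");
  }
}

a();
```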
I use the camel coding convention for my work. An identifier is written this way: `thisIsAnIdentifier`. This style is standard for Java and is pretty popular for C-like languages. I don't like it because closely related variables look different: `list` and `otherList`. Both are lists, but one is capitalized and the other is not.
I use underscores for my own code: this_is_an_identifier
.
This is the standard C style and I just decided to adopt it a long time ago.
Before that, I used to use this: `ThisIsAnIdentifier`. This is Pascal style, but many Windows C programmers use it too, because the Win32 API uses this style. I like it because it saves space!
Many people use i++ because that's how they learn it:
for(int i = 0; i < count; i++) { .... }
This irritates me to no end. I like ++i:
for(int i = 0; i < count; ++i) { .... }
C++ programmers will know why: ++i is more efficient than i++, especially for classes. However, it doesn't really matter for intrinsic types (and other languages, where you can't overload the ++ operator anyway).
After so many years, I finally decided to follow a more conventional coding style.
My old style:
    if(something)
    {
      ...
    }
My new style:
    if(something) {
      ...
    }
This will apply to all C-like languages that use braces for compound statements. It may seem very subtle, but asking people to change their coding style is like asking them to change their handwriting — it's just not going to happen.
I like to place braces on their own lines because they look logically consistent and it is easy to match the braces visually. Nowadays, many editors automatically match the braces for you.
Some people like to have a lot of spaces and align the closing braces with the block:
if ( something ) { ... }
I don't like that.
I like to keep most of my code within 80-columns. (Even HTML. Right-click and view the source.) This is a self-imposed relic from the past. Thus, I do this for my C functions:
void fn() { ... }
Many people who read my code complain that it's hard to tell where functions start and end. I don't think it's really a problem because editors are really smart these days: they are able to show the list of functions automatically.
However, I'm going to change my style to make it consistent with the above:
void fn() { ... }
I will still use some non-standard indentation to save space. A switch-case looks like this:
switch(something) { case 'a': ... break; case 'b': ... break; default: ... break; }
I do it this way:
switch(something) { case 'a': ... break; case 'b': ... break; default: ... break; }
There's a similar situation for Java/C++ classes:
class a { public void fn() { .... } }
I prefer to do this, at least for Java classes:
class a { public void fn() { .... } }
C++ classes don't suffer from this because most functions are not inlined.
I don't like deep nesting:
    for(var i = 0; i < list.length; ++i) {
      if(list[i].name == something) {
        if(list[i].value == somethingElse) {
          ...
        }
      }
    }
I find it hard to read. I prefer to do this:
    for(var i = 0; i < list.length; ++i) {
      if(list[i].name != something) continue;
      if(list[i].value != somethingElse) continue;
      ...
    }
I use 2-spaces for interpreted languages (Javascript, PHP and Perl) and 4-spaces for compiled ones (C, C++, Java). This is the general coding convention for these languages. Don't ask why it turned out this way.
Tabs should be outlawed. The reason is simple: spaces always creep into the code and mess up the indentation. Once some of the code is messed up, expect the broken windows phenomenon to take over.
Even so, it is almost impossible to get someone to stop using tabs.
This is also why every file should have an owner. The owner is expected to take care of his file, even though it may be modified without his knowledge. (Big workgroups require you to seek permission from the owner before you can modify the file, but smaller workgroups usually allow anyone to edit the file.)
One thing that leaves a poor impression on me is trailing spaces. It's like the person didn't take care to edit the file properly. And it's not as if the person has to do it manually — the editor should be able to trim trailing spaces automatically.
This is not a big deal because if I ever edit the file, my editor will remove the trailing spaces automatically.
I went home an hour early last Friday because it was Family day. I was caught in the causeway jam. Now that Singapore's most terrifying terrorist Mas Selamat has been caught, the jam is transferred to JB's side (JB always clears slower than Singapore). I was rather surprised. Okay, maybe everyone left early that day.
I went home an hour early again today (with 1-hour's leave, of course). The JB Customs is still jammed — at 4:50pm! Most are Singapore cars. I wonder how come these people can leave so early?
I'm always fascinated by the MMO concept: Massive Multi-player Online whatever, usually RPG (MMORPG). Massive meaning hundreds, at least.
Despite its decentralized look, everything happens on a fast server machine. This is obvious — you can expect rogue clients to pass you the wrong stats if they can do it. In other words, a MMO game is like a single player game that accepts its inputs over the network.
There are a few things to look out for:
The usual problem with the game economy is inflation. There are three main reasons:
The combination of limitless items and fixed prices is a problem.
NPC shops usually have fixed prices, regardless of the quantity of the items. This can be easily exploited by bots to make high-value items and then sell them. Thus, there must be some sort of demand-and-supply influence on the pricing.
It is not easy to model demand-and-supply properly. Suppose I buy a huge quantity of an item and cause the price to rise. Now, I resell all of them at the high price, making a guaranteed profit.
The value of an item should depend on its total quantity in the game world, not just those for sale.
You cannot have a player earn money when he is logged off. The most obvious is interest on deposits. If you do, even infrequent players can get a lot of money effortlessly. (Not that they will be rich, because everyone can have lots of money this way.)
This also means no stocks for regular NPC businesses. (However, you can still have stocks for quests.)
Note that you also cannot offer interest when the player is logged on, as this double-rewards frequent players (worse, which may be 24/7 bots).
Crafting is creating high-level items from low-level ones. This is mostly done right now: you need raw materials and a blueprint.
The resources used to construct the high-level item should still be counted as in-use until the high-level item is destroyed.
High-level items almost always require some rare resources, and that's where the problem starts: farming.
Hard-to-acquire resources are always in demand. As a result, some players will collect these resources and sell them for real cash.
Personally, I feel farming using in-game rules is legitimate. It's a time-vs-money situation. However, bots are usually involved to automate the process, if not outright exploitation of game engine flaws.
Other than farming, bots can be used to subvert the economy. For example, if NPC shops have a ration of items, bots can buy all of them as soon as they are available and then resell them on the black market.
Bots also mean you cannot really have any regional difference in prices. Bots can buy-and-sell to make a consistent profit.
The most important thing is that the economy must be closed, or at least close to it. Every single item must be accounted for, starting from the money supply.
An item has five stats: max quantity, current quantity (including unacquired), quantity held by players, quantity held by NPCs and quantity used as part of other items. The upper limit is to prevent runaway inflation (in items). Player hoarding will be dealt with separately.
A new area means more money supply and hence inflation. This can be overcome by using a new currency.
There must be public ranking of the top 10% holders of each item. This allows for some transparency.
We can then define monopolies. We can consider a monopoly to be <3% of the population holding >33.3% of an item; and near-monopolies, <5% holding >25%. Monopoly taxes will be imposed. A holding tax will deter hoarding. A sales tax will deter further purchases. Once in effect, it will last for a fixed duration even after the monopoly has ended.
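The rule above, written out as code (a sketch only; the function names are mine, the thresholds are the ones from this post):

```js
// holdersShare: fraction of the population holding the item (0.02 = 2%)
// itemShare:    fraction of the item's total quantity that they hold
function isMonopoly(holdersShare, itemShare) {
  return holdersShare < 0.03 && itemShare > 1 / 3;       // <3% hold >33.3%
}

function isNearMonopoly(holdersShare, itemShare) {
  return holdersShare < 0.05 && itemShare > 0.25;        // <5% hold >25%
}
```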
This has the nice effect that rare items are automatically taxed. (Not to mention their owner's identity revealed — just asking to be robbed.)
Items like weapons may be ranked by their effective power, due to their individuality.
Also note that money is also an item, so the top 5% may be taxed.
The NPCs must be accountable for the money that they use. There are a few types of NPCs: merchants, services and banks.
Merchants have an upper limit to spend. They will not buy any more if they run out of money. Periodically, they will trade with other merchants to spread out the items and get back money. Unlike the real world, merchants are not there to make a profit, but to enforce game prices. (There will always be a secondary player-to-player market.)
Services simply return money to the central supply, as these NPCs do not consume.
Banks allow the player to keep their physical money in one place and withdraw at other branches. They also provide loans that must be paid back with interest. The interest is calculated in game time, so the player is not penalized when he is logged off. The interest should be very low, though. Note that deposits do not have interest.
Banks can lend out more than what they have (fractional reserve banking), but they should try to be solvent. Even so, it is expected that banks will require bailouts from time to time.
Items should be loanable as well. This allows people to try expensive/powerful items once in a while.
Health potions, ammo, spells and crafting are obvious consumables. Since there is no need to eat and drink (nobody likes to model this), the only other main consumable is wear-and-tear.
Weapons, armour, clothes: they all wear out. The degradation is not linear. Normal clothes degrade to 70% quickly, but slowly after that. Expensive ones degrade to 85% quickly, but slowly after that. You get the idea. A vain player may want to keep his clothes above 95% all the time.
Every physical item, even a spell, has weight and takes up space. A player can only carry so many items. He needs to store the rest.
Physical items must be transported. Money can be wired, but the physical coins must be transported. (Unless there is a mint in the city.)
Transportation takes time, and it may not succeed. A transportation convoy may be attacked by monsters or robbers. This is a risk as well as an opportunity for fortune.
Monsters should drop money for convenience. But why should monsters have money? Because they know players will come and look for them. And they want it because they gain experience/levels by killing players. It's like an AI player. Monsters do not magically have money, though. Their loot comes from their kills.
It will exist. Deal with it.
Allow every user to write a bot.
It is important to note that the economy is secondary to the original intent of the MMO game.
Another global financial crisis triggered by a loss of confidence in the dollar may be inevitable unless the U.S. saves more, said Yu Yongding, a former Chinese central bank adviser.
It's "very natural" for the world to be concerned about the U.S. government's spending and planned record fiscal deficit, Yu said in e-mailed comments yesterday relating to a visit to Beijing by U.S. Treasury Secretary Timothy Geithner.
The Obama administration aims to reduce the fiscal deficit to "roughly" 3 percent of gross domestic product from a projected 12.9 percent this year, Geithner reaffirmed today. The treasury secretary added that China's investments in U.S. financial assets are very safe, and that the Obama administration is committed to a strong dollar.
It may be helpful if "Geithner can show us some arithmetic," said Yu. "We need to know how the U.S. government can achieve this objective."
The deficit is projected to reach $1.75 trillion in the year ending Sept. 30 from last year's $455 billion shortfall, according to the Congressional Budget Office.
The U.S. needs a higher savings rate and a smaller deficit on the current account, which is the broadest measure of trade, or "another financial crisis triggered by a dollar crisis could be inevitable," the Chinese academic said.
The U.S. current account deficit fell to $673.3 billion or 4.74 percent of GDP last year from $731.2 billion, or 4.91 percent of GDP, the year earlier.
China is the biggest foreign holder of U.S. Treasuries with $768 billion as of March. Premier Wen Jiabao called in March for the U.S. "to guarantee the safety of China's assets." Central bank Governor Zhou Xiaochuan has proposed a new global currency to reduce reliance on the dollar.
Yu said U.S. tax revenue is not likely to increase in the short term because of low economic growth, inflexible expenditures and the cost of "fighting two wars."
China wants to know how the U.S. will withdraw excess liquidity from its financial system "in a timely fashion so as to avoid inflation" when its economy recovers, said Yu, now a senior researcher at the government-backed Chinese Academy of Social Sciences.
He questioned whether there would be enough demand to meet U.S. debt issuance this year.
Referring to the Federal Reserve "as the world's biggest junk investor," and to Chairman Ben S. Bernanke as "helicopter Ben," Yu said the Fed has dropped "tons of money from the sky since the subprime crisis."
"The balance sheet of the Federal Reserve not only has expanded like mad but is also ridden with 'rubbish' assets," he said.
You got to love Yu:
The treasury secretary (Geithner) added that China's investments in U.S. financial assets are very safe, and that the Obama administration is committed to a strong dollar.
It may be helpful if "Geithner can show us some arithmetic," said Yu. "We need to know how the U.S. government can achieve this objective."
The whole world is playing poker today! Let's see who blinks first.
Ben S. Bernanke is called Helicopter Ben and Tim Geithner is known as Turbo Tim due to the way they throw money around.
FILING an accident claim after his car skidded last August brought motorist Tan Boon Tong, 46, two shocks.
First, the repair bill hit $19,800 - for repairs done at a workshop authorised by his insurer NTUC Income, and when the car engine had been undamaged at that.
The second and bigger shock - when Income told him the premium for his two-year-old Suzuki Grand Vitara would now be more than $5,000, up from $970.
Mr Tan, a regional business manager, said: "I do understand that premiums will rise after a claim. I can even accept it if the increase is 100 per cent - but not by more than 400 per cent."
Up until the accident - on a slip road out of the Tampines Expressway on a wet day - he had had a spotless record in the past 10 years.
For that, he had been enjoying a no-claims discount of 40 per cent.
But now, he is in a bind, he said, because other insurers have refused to insure him upon hearing of the $19,800 claim.
It was an amount Mr Tan has disputed from the start.
He said several parts which the Income-appointed workshop listed as replaced were not changed.
These included the horn, reverse sensors and front tyres.
The open market value of the two-litre sport-utility vehicle is $19,500.
When Mr Tan raised his doubts over the repair bill with Income, he was told to confront the workshop himself.
Asked for a comment, Income - Singapore's largest motor insurer - said: "Our surveyors assess the damage and confirm the required repair work with the workshop to ensure that costs are kept low."
It defended the 400-per-cent premium hike, saying it was because Mr Tan's no-claims discount had been reduced from 40 per cent to 10 per cent; Income had also slapped a claims surcharge "of the highest loading" on his policy.
Income's general manager Pui Phusangmook said: "In keeping with industry practice, the premium payable by customers who have claims against their policy will be much higher than those who have an accident-free driving record.
"This is necessary to keep premiums low for safe drivers."
But Income plans to raise motor premiums by 15 per cent to 20 per cent this year.
And at least one policyholder has written to the press saying this insurer had jacked up his premium by more than 30 per cent, despite his clean record.
The Consumers Association of Singapore said it was "concerned with the manner and the extent of the increase", especially when the car was repaired at an insurer- authorised workshop.
"Such action on the part of the insurer may send the wrong signal that there is no difference between repairing the car at an authorised workshop or doing so at an unauthorised one," Case executive director Seah Seng Choon said.
Asked if a 400-per-cent premium hike was normal, General Insurance Association president Derek Teo, also AIG's executive vice-president, said it was inappropriate for him "to comment on our competitor's quotation, but I am sure there are reasons for the hike".
But an industry veteran said the rise was "clearly excessive" and "even highend cars do not attract such a premium".
He added that a $19,000 claim was "not very high", as injury-related claims can shoot past $100,000.
QBE Insurance chief executive Michael Goodwin also declined comment on the increase, but said Mr Tan was free "to shop around" for another insurer.
Mr Tan said he has gone to four other insurers but all turned him away.
With his policy expiring next month, he said he may have no choice but to pay up.
He said: "What can I do? I'm at the mercy of these people."
Never claim against your own insurance. Let's hear the insurer's side of the story.
I REFER to yesterday's report, 'High repair bill and premium hike vex driver', in which our policyholder, Mr Tan Boon Tong, claimed that his vehicle's $19,800 accident repair bill was excessive.
It carried three photographs which failed to reveal the full extent of the damage. From the photographs taken by the Independent Damage Assessment Centre (Idac), it is clear that the damage was severe and the bill reflects the repairs done.
Ubi Auto, which had provided the lowest estimate, carried out the repairs. It has been on our panel of workshops for five years and has consistently provided quality service.
The extent of damage to Mr Tan's vehicle reflects the details he provided in the accident report: 'I was travelling towards the slip road slowing down my speed. As I entered the slip road, my car lost control and went to the right side, hit the right side railing. After hitting the right railing, my car swerved to the left, hit the left railing. My car then spinned a few times before coming to a stop on the right side railing with the front portion of my car facing the oncoming vehicles.'
Mr Tan alleged that several vehicle parts, including the horn, reverse sensors and front tyres, were not repaired but billed. The sensors and tyres were in fact replaced. The horn was not replaced as it was disallowed by the surveyor and was excluded from the bill.
The article reported that other insurers have refused to insure Mr Tan because of his claim. It is normal for insurers to decline insurance on the basis of commercial considerations in such cases. We made an exception by providing him a quote to use as a benchmark to get better rates from other insurers if he wished.
Mr Tan's gross premium would have risen by 30 per cent upon renewal if no claim had been made. As he had made a claim, the gross premium increased by 65 per cent and our underwriters imposed a 110 per cent loading. His no-claims discount was reduced from 40 per cent to 10 per cent.
These translated to an increase of about 400 per cent. Four cars in 10,000 renewals saw an increase exceeding 400 per cent in the past year. All had poor claims records and posed underwriting concerns. Our aim is to keep premiums for safe drivers as low as possible.
Pui Phusangmook
Senior Vice-President & General Manager
General Insurance Division
NTUC Income
I'm surprised no one wants to insure him after just one claim.
Category | Jan | Feb | Mar | Apr | May |
---|---|---|---|---|---|
Basic | 1,061.55 | 985.02 | 2,574.12 | 685.04 | 987.35 |
Cash | 213.60 | 241.50 | 172.10 | 151.00 | 160.00 |
Vehicle | 258.35 | 307.97 | 577.60 | 1,886.78 | 146.74 |
Others | 116.30 | 875.00 | 308.15 | 391.94 | 66.00 |
Total | 1,649.80 | 2,409.49 | 3,631.97 | 3,114.76 | 1,360.09 |
A very low-expense month. I think this will be the lowest for the year.
After 68 years, comics teen makes his choice — but many think it's wrong
After 68 years of waffling, Archie Andrews has made his choice. It's the raven-haired heiress over the girl next door, Veronica Lodge over Betty Cooper.
Just eight days after Archie Comics announced that Archie would finally choose between his two high-school hotties, the word is out: Archie gets down on bended knee to present Veronica with his proposal and a ring while poor Betty looks on and wipes away a tear. Veronica replies to the proposal with a resounding, "Yes!"
The red-haired all-American boy's choice is likely to upset many Archie fans. Ever since news that Archie would get married broke, they have been filling the message boards at ArchieComics.com with their opinions on which girl should get the ring.
It's not even close. A strong majority feel wholesome Betty should get the nod over snooty Veronica.
"I hope it's Betty! I've read these comics for over 30 years and waited for the day he woke up and chose Betty," wrote fan Rachel.
"I think he should ask Betty," agreed reader Rob. "Veronica is too sophisticated and too richy rich for him. Betty is very laid back, sort of like the All American Girl she has always been. Betty has a big heart, she would make a great wife to Arch, and I would be disappointed if Archie chose Veronica! Good luck to Archie."
Among the minority who felt Veronica was the obvious choice was "archielover," who wrote: "OMG! Pick Veronica! He doesn't deserve Betty, he always makes her do all his chores and fix his car and help with his homework, while he treats Veronica like a little princess! I can't wait to see what happens :)"
But mind you, Issue 600 will deal only with the engagement. Whether Archie and Veronica actually get hitched as the story plays out over several issues remains to be seen. Given Archie's history of indecision, would it be unreasonable to assume that there are more plot twists ahead?
The news that Archie would get married leaked out on May 20 on the Internet and then in the official source of all things relating to celebrity gossip: the New York Post, which ran the shocking story under the headline: "BETTY OR VERONICA? ASK ARCH"
Archie Comics' official blog confirmed the epochal news, digging into its cache of exclamation marks to declare: "ARCHIE ANDREWS IS GETTING MARRIED!"
The blog continued: "The eternal love triangle is the cornerstone of Archie Comics for over 65 years — and the bane of Archie Andrews' existence! In fact, over the years it has not only defined the Betty/Archie/Veronica relationship, but has even threatened to dismantle it a time or 20."
According to the blog, the six-part story of Archie's long-delayed passage into adulthood will begin in Issue 600 of his eponymous comic book "Archie." It will ship on Aug. 12 and be available in comics shops on Sept. 8.
In the story, Archie has finally graduated from Riverdale High School; in fact, he's five years older and a college graduate. He may even have — gasp! — an actual job when he gets engaged.
But who's going to be best man? It can't possibly be that wiseacre Reggie, can it? Our money is on Jughead, who still can't lose that goofy hat he's been wearing since Bob Montana first drew it on his tousled head way back in 1941.
Writing the story for Archie Comics is comic book aficionado and movie producer Michael Uslan, who resurrected the Batman movie franchise in 1989. Longtime Archie Comics artist Stan Goldberg will illustrate the tale.
Speaking as someone who has read a lot of Archies, from the 70s to the early 2000s, I don't care. It doesn't matter whether you read 1 or 1,000 Archie comics. There is no continuity and the stories, being short and standalone, have no depth. The characters can be smart, dumb, kind, selfish or whatever depending on the story.
I would like the Archies better if there were continuity to let the characters grow. The series could be rebooted every few years, like the Transformers. (Although reboots weren't their intention; the writers simply keep painting themselves into a corner.)
Even though I used to hate it, I now think reboots are a smart idea. It allows the writers to experiment and carry the best ideas over different timelines. (And watch fans trying to reconcile the timelines, that's fun. :-))
I removed all thumbnails and will depend on the Auto Thumbnail Generator to generate the thumbnails as-needed.
With this change, I also changed the default thumbnail size to 267x200 from the previous 173x130, because screen resolutions are getting higher. However, I kept the thumbnails at 173x130 for some pages because the larger size would disrupt the layout. I don't want to modify the webpages at all.
These pages use a table-based layout. They are hardcoded to show 4 thumbnails per row. When the thumbnails are 173x130, we need just 692 pixels to show the whole table properly. When the thumbnails are 267x200, we need at least 1068 pixels to do so. For me, a layout that requires over 800 pixels is bad design. If I ever re-code these pages, I will use DIVs so that the number per row can be dynamic. (Just like what I did on the home page.)
I also had to put in some special support for DVD images to convert them to the correct aspect ratio.
No problem, all can be solved by using the .htaccess file to point to different PHP scripts.
My blog currently uses 400x300 images. After my experience with the thumbnails, I now know it is important to future-proof the images. I will use 800x600 images from now onwards, but use the Auto Thumbnail Generator to resize them.
In an effort to make the homepage look less static, I made some of the images a slideshow. Hopefully, this will entice the visitors to click on them.
I did not make all images a slideshow because a new image means an additional download resource. The main page already requires 40 separate downloads and is 708kB (520kB zipped).
I now hide most of the text after the first five entries. This is done in steps so as not to freeze the browser.
jQuery really makes the job easy — I didn't have to modify the HTML at all!
I "abused" the file's c/m/a time when I implemented caching for thumbnails for my website.
The cached file will never be modified, so we can use ctime (creation time) as the last modified time. This allows us to compare with the original file to see if it is out-of-date.
When the cached file is out-of-date, we will recreate it and update the ctime. PHP 5 doesn't allow us to change the file's creation time. It should be possible to do so using the Win32 API SetFileTime(). No problem: I just delete the file so that the new file will have the correct ctime.
We want to track the number of accesses and the time of last access, so that we can delete starting from the oldest file when the cache is full.
I use the atime as the last accessed time. Windows 2000 onwards updates it whenever the file is accessed (with a time lag, for performance reasons). If we used it for other purposes, the automatic updates would mess up our tracking. Vista SP1 turns this feature off by default, but that doesn't matter to us: we update the atime ourselves, so it works whether the feature is on or off.
We then have the mtime (last modified time) to store any 32-bit value we desire. I use it as a simple counter.
There's one limitation: it doesn't work on a FAT/FAT32 partition, because they don't support the last accessed timestamp. Well, I don't care.
We now turn our attention to Unix.
On Unix, the ctime is the inode's last changed time. It is updated to the current time whenever the inode is changed, such as updating the timestamps. It cannot be set to any other value.
No problem, we just use it as our last accessed time. This is because we are going to update the timestamps whenever we access the file.
We use the atime as the last modified time. If the Unix system is configured to update the atime, it will be wrong after every access. However, we restore the time every time we read the file.
We are free to use mtime as a 32-bit value, just like on Windows.
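To make this concrete, here is a minimal sketch of the Unix mapping (atime standing in for "last modified", mtime as a free 32-bit counter, ctime implicitly recording the last access). The function names are mine, and stamping the source file's mtime into the cache's atime is just one way to do the staleness check; treat this as an illustration, not the exact code.

<?php
// Illustrative sketch of the Unix mapping described above (names are mine):
//   atime -> stand-in for "last modified" (restored after each read)
//   mtime -> free 32-bit value, used here as a hit counter
//   ctime -> inode change time; doubles as "last accessed" because we
//            touch() the file on every access anyway

function cache_is_stale($cacheFile, $srcFile)
{
    clearstatcache();
    // The source's mtime was stamped into the cache's atime at creation.
    return !file_exists($cacheFile) || fileatime($cacheFile) < filemtime($srcFile);
}

function cache_write($cacheFile, $srcFile, $data)
{
    file_put_contents($cacheFile, $data);
    // Start the counter at 0 and remember the source's modification time.
    touch($cacheFile, 0, filemtime($srcFile));
}

function cache_read($cacheFile)
{
    clearstatcache();
    $modified = fileatime($cacheFile);         // our "last modified" stand-in
    $hits     = filemtime($cacheFile) + 1;     // bump the counter
    $data     = file_get_contents($cacheFile); // reading may disturb atime...
    touch($cacheFile, $hits, $modified);       // ...so restore it; ctime now marks this access
    return $data;
}
?>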
I went to West Mall on Sunday and was told to park at the HDB carpark opposite because it is free.
Free? When I came back, I saw a $30 fine staring at me.
In hindsight, I should have known. An HDB carpark opposite a shopping mall? It's usually not free. Still, I'm going to appeal, even if I have no grounds to do so.
Interestingly, my brother who was parked in the same carpark, but on a different floor, was not fined. Take it as you will.
Update (28/5): my appeal was rejected.
The Founders put the contracts clause in the Constitution for a reason.
The rule of law, not of men — an ideal tracing back to the ancient Greeks and well-known to our Founding Fathers — is the animating principle of the American experiment. While the rest of the world in 1787 was governed by the whims of kings and dukes, the U.S. Constitution was established to circumscribe arbitrary government power. It would do so by establishing clear rules, equally applied to the powerful and the weak.
Fleecing lenders to pay off politically powerful interests, or governmental threats to reputation and business from a failure to toe a political line? We might expect this behavior from a Hugo Chávez. But it would never happen here, right?
Until Chrysler.
The close relationship between the rule of law and the enforceability of contracts, especially credit contracts, was well understood by the Framers of the U.S. Constitution. A primary reason they wanted it was the desire to escape the economic chaos spawned by debtor-friendly state laws during the period of the Articles of Confederation. Hence the Contracts Clause of Article V of the Constitution, which prohibited states from interfering with the obligation to pay debts. Hence also the Bankruptcy Clause of Article I, Section 8, which delegated to the federal government the sole authority to enact "uniform laws on the subject of bankruptcies."
The Obama administration's behavior in the Chrysler bankruptcy is a profound challenge to the rule of law. Secured creditors — entitled to first priority payment under the "absolute priority rule" — have been browbeaten by an American president into accepting only 30 cents on the dollar of their claims. Meanwhile, the United Auto Workers union, holding junior creditor claims, will get about 50 cents on the dollar.
The absolute priority rule is a linchpin of bankruptcy law. By preserving the substantive property and contract rights of creditors, it ensures that bankruptcy is used primarily as a procedural mechanism for the efficient resolution of financial distress. Chapter 11 promotes economic efficiency by reorganizing viable but financially distressed firms, i.e., firms that are worth more alive than dead.
Violating absolute priority undermines this commitment by introducing questions of redistribution into the process. It enables the rights of senior creditors to be plundered in order to benefit the rights of junior creditors.
The U.S. government also wants to rush through what amounts to a sham sale of all of Chrysler's assets to Fiat. While speedy bankruptcy sales are not unheard of, they are usually reserved for situations involving a wasting or perishable asset (think of a truck of oranges) where delay might be fatal to the asset's, or in this case the company's, value. That's hardly the case with Chrysler. But in a Chapter 11 reorganization, creditors have the right to vote to approve or reject the plan. The Obama administration's asset-sale plan implements a de facto reorganization but denies to creditors the opportunity to vote on it.
By stepping over the bright line between the rule of law and the arbitrary behavior of men, President Obama may have created a thousand new failing businesses. That is, businesses that might have received financing before but that now will not, since lenders face the potential of future government confiscation. In other words, Mr. Obama may have helped save the jobs of thousands of union workers whose dues, in part, engineered his election. But what about the untold number of job losses in the future caused by trampling the sanctity of contracts today?
The value of the rule of law is not merely a matter of economic efficiency. It also provides a bulwark against arbitrary governmental action taken at the behest of politically influential interests at the expense of the politically unpopular. The government's threats and bare-knuckle tactics set an ominous precedent for the treatment of those considered insufficiently responsive to its desires. Certainly, holdout Chrysler creditors report that they felt little confidence that the White House would stop at informal strong-arming.
Chrysler — or more accurately, its unionized workers — may be helped in the short run. But we need to ask how eager lenders will be to offer new credit to General Motors knowing that the value of their investment could be diminished or destroyed by government to enrich a politically favored union. We also need to ask how eager hedge funds will be to participate in the government's Public-Private Investment Program to purchase banks' troubled assets.
And what if the next time it is a politically unpopular business — such as a pharmaceutical company — that's on the brink? Might the government force it to surrender a patent to get the White House's agreement to get financing for the bankruptcy plan?
Well said.
Quoted from a post online:
Chains that we can believe in. We live in really interesting times.
I came up with a simple caching strategy by using a dummy file's timestamps. This is not a new idea, of course. Using file attributes to track metrics is an idea as old as dirt.
(File attributes are nice to use because the window for race condition is much reduced.)
I use the dummy file's modified timestamp to track the number of accesses and the last accessed timestamp to track the last access time. Yup, there is no reason why the timestamp has to be a real time.
This is done on every cache access, so it has to be fast.
If the values exceed a predefined threshold, then we will do a cache sweep. I set the limits to 1,000 accesses and 15 minutes. By doing this, an attacker won't be able to flood the cache.
The cache sweep is very simple. If the total size exceeds the high watermark level, I delete the files to the low watermark level, starting from the oldest file.
If needed, I can use the dummy file's creation timestamp to keep track of a third metric, even on Unix. On Unix, it is the inode's last changed timestamp, which is updated whenever the inode is updated, such as when we update the file attributes. Even so, it can be used, but it is trickier to do so.
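For illustration, here is a minimal sketch of that bookkeeping, assuming a hidden dummy file named .cache-meta and made-up watermark values; the names, thresholds and the exact trigger logic are my interpretation rather than the original code.

<?php
// Sketch: dummy-file metrics plus a simple sweep (illustrative values).
define('SWEEP_AFTER_HITS', 1000);          // ~1,000 accesses
define('SWEEP_AFTER_SECS', 15 * 60);       // ~15 minutes
define('HIGH_WATERMARK',   50 * 1048576);  // start sweeping above 50 MB
define('LOW_WATERMARK',    40 * 1048576);  // sweep down to 40 MB

function cache_note_access($dir)
{
    $meta = "$dir/.cache-meta";            // dummy file; holds no data
    if (!file_exists($meta)) touch($meta, 0, time());
    clearstatcache();
    $hits = filemtime($meta) + 1;          // mtime "abused" as an access counter
    $last = fileatime($meta);              // atime = time of the last sweep/reset

    if ($hits < SWEEP_AFTER_HITS && time() - $last < SWEEP_AFTER_SECS) {
        touch($meta, $hits, $last);        // cheap path: just bump the counter
        return;
    }
    cache_sweep($dir);
    touch($meta, 0, time());               // reset both metrics
}

function cache_sweep($dir)
{
    $files = array();
    $total = 0;
    foreach (glob("$dir/*") as $f) {       // glob('*') skips dot-files, so the dummy is safe
        $files[$f] = fileatime($f);        // sweep oldest-accessed files first
        $total    += filesize($f);
    }
    if ($total <= HIGH_WATERMARK) return;
    asort($files);                         // oldest atime first
    foreach ($files as $f => $atime) {
        if ($total <= LOW_WATERMARK) break;
        $total -= filesize($f);
        unlink($f);
    }
}
?>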
Windows does not allow \ / : * ? " < > | in filenames. Why? Unix allows all of them (except /, the directory separator) if you escape them.
All characters should be allowed, even the directory separator. Actually, the directory separator should be a non-printable character.
In fact, at the application level, paths and filenames should not be strings.
Imagine an application using a file. Now we rename the path. Can the application still access the file? Maybe, but most likely not, because the path is now invalid. Why should this be the case?
It may be surprising, but OS/2 did this right ten years ago.
Webpages usually need to display images at a few sizes, typically two: thumbnail and the actual size. The thumbnails should be dynamically generated. Many websites are already doing this.
However, the thumbnails should not be regenerated all the time. They should be cached by the server automatically. Again, this is quite a common technique.
The basic process is very simple: check whether an up-to-date thumbnail already exists in the cache; if it does, serve it; if not, generate it, store it in the cache and then serve it.
The trick is in the cache management. Most sample scripts never clear the cache, so it just keeps growing in size.
I'm still thinking about the simplest way to implement the cache management. It is very easy with a database, but that seems like overkill to me.
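As a rough sketch of the serve-or-generate flow (assuming GD, JPEG sources and a writable ./cache directory; all names below are illustrative, and the cache management is deliberately left out):

<?php
// Illustrative serve-or-generate flow; not the actual site code.
function serve_thumbnail($src, $width = 267, $height = 200)
{
    $cache = './cache/' . md5("$src-{$width}x{$height}") . '.jpg';

    // Regenerate only if the cached copy is missing or older than the source.
    if (!file_exists($cache) || filemtime($cache) < filemtime($src)) {
        list($w, $h) = getimagesize($src);
        $full  = imagecreatefromjpeg($src);
        $thumb = imagecreatetruecolor($width, $height);
        imagecopyresampled($thumb, $full, 0, 0, 0, 0, $width, $height, $w, $h);
        imagejpeg($thumb, $cache, 85);     // q85, as used elsewhere on the site
        imagedestroy($thumb);
        imagedestroy($full);
    }

    header('Content-Type: image/jpeg');
    readfile($cache);
}
?>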
One of my habits when I start a programming project is to turn on the highest warning level. When I did so for a small program I wrote recently, I was shocked to see tens of warnings. Well, they were all the same warning: unused function parameter. This is a useless warning, so I disabled it.
(Usually, the compiler omits this check if the parameter name is omitted. I don't like that syntax.)
What remained was very interesting:
I got this warning because I did something like this:
p.shi2_max_uses = -1; // type DWORD
DWORD is an unsigned type, so assigning a negative value to it triggers this warning. Yet most people prefer to write -1 rather than 0xffffffff, because -1 works regardless of the integer's bit size, whereas 0xffffffff does not.
Should I cast it?
p.shi2_max_uses = (DWORD) -1;
In this case, it is probably okay, because p.shi2_max_uses is part of a published structure. If it's an internal structure, what if the field changes to another type? Then the cast won't make sense anymore.
I prefer not to typecast just to get rid of warnings, because I don't like to hardcode the type within the code. There is only one way to avoid it: use int for integer types as much as possible.
I got this warning because I did something like this:
jint len = strlen(path);
jint is of the type long, and strlen() returns a value of the type size_t, which is defined as unsigned int.
You'll get this error whenever you try to assign an unsigned value to a signed variable.
Signed/unsigned do not mix. It's best to use unsigned if possible:
size_t len = strlen(path);
Unfortunately, it's not possible if the value is passed to another function that takes in an int. In that case, should I cast it?
jint len = (jint) strlen(path);
I don't, because I would be violating the types. If you think about it, why doesn't strlen() return int and save us the trouble?
Even though unsigned gives us one more bit, it makes it hard to use the variables without warnings. Thus, I only use unsigned very sparingly, usually for interfacing with hardware. Unfortunately, it's hard to avoid, as the RTL uses it quite often; strlen() is a good example.
This one is surprising:
HKEY localHive = reinterpret_cast<HKEY> (hive);
reinterpret_cast<> is just a fancy way to typecast. It is advisable to use these cast operators in C++ rather than the old C-style syntax, which is too generic. The code is equivalent to:
HKEY localHive = (HKEY) hive;
(Interestingly, this doesn't give the warning.)
This warning detects potential 64-bit portability issues. HKEY is a pointer to a structure, whereas hive is an int. On a 64-bit system, a pointer is 64 bits, whereas an int remains 32 bits.
I'm still deciding whether to disable this warning. I have no intention to port the code to 64-bit Windows, but this warning hints at possible problems.
Many problems in C are due to the language itself: int, for example, is defined too vaguely.
Film grain and digital displays do not mix: a film with grain just looks incredibly low quality, even if it is remastered. Some digital noise reduction is needed.
I am reminded of this when I compared a sample Macross R1 and R2 DVD still (at 1:1):
The R1 DVD is "re-mastered", but it is from a poorer source, and all that could be done was to bump up the contrast, thicken the outlines and add some EE (edge enhancement) to increase the apparent sharpness.
To tell the truth, the R1 looked better than anything offered before. I thought it was pretty good for a 1982 cartoon. Until I saw the R2 remastered version. Wow, all the details are there, the color looks authentic (muted rather than disney-colors) and no EE!
But the video is very grainy. Believe it or not, that made the video look worse than the R1. (Only apparently worse, because it is actually better.)
A fan is in the process of releasing the fansubs with English subtitles, at the rate of one episode per fortnight, so we can expect to get the whole series two years later.
Guess what, he is releasing it with the grain too!
The Japanese like to max out their video bitrate, so the R2 DVDs are likely to be encoded at 9.8Mbps. The fan encodes the video at 720x480 (DVD res) using H.264 at 3.8Mbps. H.264, being more efficient than MPEG-2, should be a near-lossless transcoding at that rate.
(Now, I'm glad that the fansubs have the grain intact. It's not easy to degrain, because most people set the threshold too high and destroy the details.)
I'm too poor to afford the 39,900 yen asking price for the remastered DVDs, so the fansub will have to serve as my source. I'm going to filter away the grain and lower the bitrate. 3.8Mbps is insane. With an ultra-clean video, 600kbps should be sufficient at 512x384. The low-resolution doesn't hurt. Macross is a low-budget cartoon, so it is not very detailed to begin with. (Anime used to favour motion over details. Nowadays, many anime are a series of detailed stills.)
If anyone has the R2 DVDs, I can extract the subs from my R1 DVDs to create super-R2 DVDs, or encode a fansub — without the grain.
Zip files are quite well-integrated into Vista. They are almost like folders now. But they are not there yet.
Why do I say that?
If you have a zip file of pictures, you can browse through them as if you are browsing through a folder of pictures. Unfortunately, the illusion falls apart when you have a zip file of zip files of pictures. It works for the first level only.
Why? It should work, and it should work for all kinds of compressed/archive files, such as rar, gzip and tar.
Experts' suggestions: Each road here should have no more than two bus services, and commuters should be encouraged to make transfers even if it is a 'pain'.
This will increase the connectivity and frequency of buses, said Dr Paul Barter, Assistant Professor at the Lee Kuan Yew School of Public Policy.
The transport policy expert is also in favour of fewer direct bus services, a move that probably will not go down well with commuters here.
'If you have three start and three end points, a direct system would need nine bus services. But with a central node where commuters transfer, you need only three services,' he pointed out.
In Bogota, Colombia, commuters prefer buses to trains.
The bus system, called the TransMilenio, consists of numerous elevated stations in the centre of a main avenue.
A dedicated bus lane on each side of the station allows express buses to pass through on one side without stopping, while regular bus services stop on the other side of the station.
Speaking at yesterday's forum, former mayor of Bogota Enrique Penalosa said buses can serve commuters as efficiently as trains. In some cases, buses may even be more efficient and operate at a fraction of the cost of a subway system.
Singapore's plan for trains and buses: The Government will double the rail network from 138km now to 278km by 2020.
Improvements in bus services are planned too, with the Land Transport Authority (LTA) taking over the central planning of bus routes from the two rival operators later this year.
By next year, the penalty for making transfers will also be completely removed to encourage commuters to make more bus-train-bus connections to get to their destinations.
Experts' suggestions: Pedestrian and bicycle paths form the backbone of Bogota's transport network. Since building these paths, the number of cyclists in the city has shot up tremendously.
To encourage more people to cycle, Mr Penalosa suggested that bicycles be given priority and protection on the roads. He added that bicycle spaces should be made available in carparks.
Dr Barter believes that Singapore 'does not know what it is doing when it comes to bicycles' and should ask for help from experts in the Netherlands where there is an extensive network of cycling tracks and many cyclists.
'If we do this well, people in suits will ride bicycles,' he said, adding that the weather here is not a deterrent to cycling as he sees 'hundreds' of bicycles parked outside the MRT station in Tampines.
This is why he believes that the Park and Ride scheme, which encourages car owners to park near an MRT station and hop on a train, should be scrapped in favour of one that promotes cycling.
Singapore's plan for bicycles: Pasir Ris, Sembawang, Taman Jurong, Tampines and Yishun will get about 10km of cycling tracks each. More bicycle parking facilities will also be built at selected MRT stations.
Experts' suggestions: Certificates of Entitlement (COEs) should be valid for a fixed distance, say 50,000km, rather than for 10 years, suggested Dr Barter. This 'pay as you use' approach would discourage car ownership in Singapore.
'When motorists pay such high prices for their cars, they will instinctively want to use them as much as possible till their COEs expire,' he explained.
A distance-based charge would remove the urge to maximise the use of their cars.
He also suggested increasing parking charges to reflect the value of real estate in the area rather than having flat rates for public parking, regardless of whether the lot is in the city centre or suburb.
Singapore's approach to cars: To control the vehicle population, the number of COEs available is linked to the number of cars scrapped.
Recently, the Government cut the COE supply in a bid to slow down the growth of the vehicle population.
There is also a gradual move away from ownership taxes towards more usage charges, as can be seen in the extension of the Electronic Road Pricing network.
The suggestion for distance-based COE is not new. But I'm sure LTA will make it both distance and time based. 200k km or ten years, whichever comes first.
I use iframe in my blog and it struck me that search engines used not to follow through the iframe. It seems now they do, because people use it to hide things.
Search engines still don't parse JS files though, so if you store the contents in the JS and render them dynamically, they will not show up in searches.
I have many dynamic pages that do just that. If I ever get around to reimplementing them, I will store the data in the HTML as divs, using some sort of "micro-format".
The Javascript object format is a natural data format:
{ name: value, name2: value2, name3: value3 }
The elements can then be accessed like a structure: elm.name, elm.name2. However, it requires the names to be repeated. Thus, I decided to use an array instead:
[ value, value2, value3 ]
JS does not require elements of an array to be of the same type. (This is a very powerful feature that cuts both ways.) However, the elements are not as descriptive: elm[0], elm[1].
Later, I use a function to create the objects to get the best of both worlds:
new Obj(value, value2, value3)
The next step is to use a div:
<div class='elm'> <div>value1</div> <div>value2</div> <div>value3</div> </div>
It's more verbose, but it's more important for the contents to be searchable. It is very easy to use jQuery to convert the DIV to a JS object.
Vista restricts your choice of the sleep timeout to 1-min, 2, 3, 5, 10, 15, 20, 25, 30, 45-mins, 1-hour, 2, 3, 4, 5-hours and never.
Why not an input box or a slider to let us choose any value?
In any case, this is an example of adaptive growth.
A really adaptive notebook will take the environment into account. If the user is near, the notebook will increase the timeout.
I was thinking what a great idea an application sandbox is, then I googled and found that there are already a few!
A native app has full access to the file system and OS resources (limited by the user's access rights). Now that I think of it, this isn't right. The app should run in a sandbox. The user can specify what resources it is allowed to use and which directories it is allowed access to. This will automatically make all apps safe to run. (Hacking into the OS kernel aside.)
IE 7 on Vista runs in a sandbox when UAC (User Account Control) is enabled. That alone should drastically reduce the number of successful attacks on Windows.
Many programming languages treat char and string as different types. Most of them convert char to string automatically, but not the other way round. Because I started with such programming languages, for a long time I couldn't accept that a character is merely a string of length one.
C treats strings as an array of characters. The language itself has almost no support for strings (except for literal strings). Everything else is in the RTL (Run-Time Library).
This makes it very easy to access strings: accessing a string is just like accessing an array. Unfortunately, C's array has two shortcomings: it is not dynamic (its size is fixed at creation), and it does not know its own size.
String as an array-of-char has been carried over to many C-like languages, even though strings are real objects and not true arrays. In these languages, both arrays and strings are dynamic — and know their size. They are much easier to use.
Strings, arrays and other dynamic containers should be managed by a memory manager, so that the manager can reclaim the unused memory when memory is low, and shuffle the data around to avoid memory fragmentation. With a manager, it is also easier to over-allocate initially and reclaim the memory in a sweep later.
For example, a dynamic array usually grows exponentially: 0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1k, 2k, 4k and so on. It is too slow initially and too fast later on. A simple tweak is to grow 4x when it is much smaller than the average size, 2x near the average size and 1.5x when it is bigger. Or, we can allocate the average size right away.
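A tiny sketch of that tweak (the factors and the average/4 cut-off are my own illustrative choices, not a fixed rule):

<?php
// Illustrative growth policy: grow fast when far below the average size,
// slower as the array approaches and exceeds it.
function next_capacity($current, $average)
{
    if ($current === 0)           return 1;
    if ($current < $average / 4)  return $current * 4;   // much smaller than average
    if ($current < $average)      return $current * 2;   // near the average
    return (int) ceil($current * 1.5);                    // bigger than average
}
?>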
This is why I like the so-called "managed" languages more and more. They make it very easy to write an app without being bogged down by needless housekeeping.
I was browsing around and saw that Final Fantasy Advent Children was available as either a 720p (6.5 GB) or a 1080p (11 GB) download. That's even bigger than the DVD release!
720p | 1280x720, 23.976 fps, x264, 5,900 kbps |
1080p | 1920x1080, 23.976 fps, x264, 10,800 kbps |
Audio | Japanese DTS 5.1, 1,509 kbps |
The 1080p version is really overkill. I'm all for high resolutions, but the file size is ridiculous.
I bought the original DVD a few years back and also downloaded the show to get the English subtitles! (Most Japanese R2 DVDs have no English subtitles. This is very unfortunate for the International market.)
I can't be bothered with the BluRay release. The show doesn't make much sense at all — even to a fanboy. In fact, it is complete nonsense to someone who never played the FF-7 game. No wonder. It started as a short game clip, but the animation costs overran and it was changed to a movie instead. In other words, visual effects over plot coherence.
I'll rather watch the Final Fantasy X and Final Fantasy XII "movies". They are recorded off the video clips within the games. While the animation is below-par, the stories are pretty decent.
Oh wait, this latest release of Final Fantasy Advent Children is 126 minutes and is almost 1/3 longer. Perhaps the story will be more coherent now.
I hate unit testing. In my past experience, it is usually testing for the sake of testing. (In other words, a checkbox in the task list.)
I want unit testing to be meaningful. The purpose of unit testing is to ensure that a unit (or component) conforms to its specifications. Yes, a unit test is the specification in executable form. It is not a random bunch of test cases.
I shall write my unit tests with this in mind. It is not as easy as it sounds, however. Specifications usually have many implicit assumptions. For example, they often only cover the correct cases. What about when things go wrong? That's where unit tests come in: they force you to ensure the error cases are handled properly too.
I can think of two other reasons why unit tests are not popular. One is ever-changing code: the unit tests must be rewritten as well, which is double the work.
Also, the testing infrastructure itself is not stable/consistent. It is often changed and renders the unit tests useless.
In reality, the first reason is a red herring. Interfaces shouldn't change often. An interface may change drastically when it is initially defined, but it should be very stable after a while. That's when you start to write unit tests.
Watching SyncToy synchronize two directories, I realize that, ironically, files that do not change are the ones that require full comparisons. And most files don't change.
SyncToy does not do a full comparison by default. It just uses the file attributes. This works almost 100% of the time, as long as you don't change the file's timestamp. (Windows does not have a built-in method to let the user change a file's timestamp.)
In Windows, a file has three timestamps: create, last accessed and last modified. We are usually interested in the last one only. Using it and the file size, we can quickly determine if the file has been changed.
The other attributes are useful for determining a file rename/move. We can hash the file contents so that we are even more sure the files are the same.
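A minimal illustration of that two-step check (not SyncToy's actual code; the names are mine): compare size and last-modified time first, and only hash when extra certainty is needed, for example when matching up files that may have been renamed or moved.

<?php
// Illustration only: cheap check first, expensive hash second.
function probably_unchanged($a, $b)
{
    clearstatcache();
    return filesize($a) === filesize($b) && filemtime($a) === filemtime($b);
}

function definitely_same($a, $b)
{
    return md5_file($a) === md5_file($b);   // expensive: reads both files in full
}
?>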
LTA allows you to bid for your vehicle number. Minimum bid is $1,000. However, LTA does not want speculation, so they make it a sealed blind auction. You can't tell how many people bid and how much they are willing to pay. You bid an amount and if you win, you pay that amount. It's totally opaque.
I often hear single digit numbers going for 5-figures. I'm not surprised. However, I'm more interested in the second highest bid. Is it close or is it far?
LTA also makes it difficult to trade car numbers. You need to own both vehicles to transfer the numbers. Also, there's a $1,300 transfer fee (one way), so an exchange will be $2,600.
Does this stop trading? For the most part. However, a very small group of people is in a position to take advantage of the situation: car traders who keep very old, laid-up cars. They pay almost nothing to maintain the cars, and the vehicle transfer fee is almost zero. Before LTA omitted the number-of-transfers info from their website, it was not surprising to see such vehicles with over 30 transfers.
My notebook allows neither 1280x768 nor 1360x768, the two wide-screen resolutions that my TV accepts. After I upgraded to the latest graphics driver, it allows 1280x768, but I want 1360x768 — my TV's native resolution.
Enter custom video resolution.
Note this magic string: 18 21 50 A0 51 00 1B 30 40 90 13 00 00 00 00 00 00 1C
Decoded, it means:
Pixel Clock | 84.72 MHz |
Active pixels | 1360 x 768 |
H Sync Pulse | 1424 (start), 1568 (end), 1776 (blank end) |
V Sync Pulse | 769 (start), 772 (end), 795 (blank end) |
Sync | -hsync, +vsync |
(The refresh rate is implied to be 60 Hz: pixel clock divided by total pixels per frame, i.e. 84.72 MHz / (1776 x 795) = 60 Hz, approximately.)
Still Greek to you? Don't worry, these terms are easy to google. What is important is that graphics drivers these days allow custom video resolutions, a far cry from the past. However, it's still difficult to specify one.
If you take a look at your TV's manual, it will tell you the supported input video resolutions with the clock rate and vertical refresh rate. The computer should be able to calculate the other parameters from these. (The sync values are mostly used for centering the image these days.)
The keyword today is, adaptive.
Windows Vista has an adaptive display timeout option. Basically, if you repeatedly turn the display back on after it goes off, Vista will lengthen the timeout. Not bad.
(I know just what this is used for. I'm sure everyone has encountered the projected screen going blank during a meeting.)
The explosion of interpreted programming languages in the past ten years has introduced us to two very convenient features: dynamic arrays and garbage collection.
Dynamic arrays are adaptive. You don't have to worry how much memory to allocate. You just use the array. When it runs out of space, it automatically allocates more space. Wonderful.
(The programming environment can only grow the array, though. The programmer has to shrink the array manually, because only he can tell when he is done with the elements. People usually don't bother in these days of multi-GB RAM. Sloppy programming? Not necessarily, if the program's lifecycle is short.)
The third adaptation is due to an existing browser limitation. Javascript must execute quickly, or it blocks the user interaction. If you want to perform a long operation, you need to split it up.
A naive implementation would process a fixed number of elements per cycle. However, this may be too many for a slow machine and too few for a fast machine. Instead, what we want is to return control to the browser every 250ms, for example. (It is still within the human no-latency threshold.)
Thus, we start off with a small number of elements and increase it until we hit the maximum for a 250ms cycle. We continue to monitor so that it can adapt both up and down. As long as Javascript remains single-threaded, this technique will remain useful.
I actually used this technique once and am sure I will use it more often in the future.
Although the keyboard may be a mundane input device, it can make or break the user experience.
There was a time when the Fn key was the bottom-leftmost key. That's stupid. Everyone expects the Control key to be there. Thankfully, most keyboards now put Ctrl there.
What this keyboard got right:
What it didn't get right:
I can't stand keyboards with a column of keys to the right of Backspace/Enter. They make it hard to "home", especially if you're going by feel.
I like the Pg up/down together with the cursor keys, but most notebooks don't put them together, even though there's clearly space for them. It's still not so bad if they are above the Backspace key — they are easy to find by feel — but most keyboards prefer to put Insert/Delete there.
Simple solution: swap the keys physically. I wish it worked like that; instead, it has to be done using a s/w key mapper.
I also wish the Home/End keys are near the cursor keys. This can be done by making the right Shift key a bit smaller. Home/End are not keys you use often, so you can make them perhaps 1/2 the usual size and squeeze them as part of the Shift key.
Also, the touch bar above the keyboard is a good idea, but it is impossible to see in the dark. A cheap solution: use a light desktop background, tilt the screen slightly to illuminate it!
Your Kung Fu is weak!
This line has been immortalized by geeks since The Matrix. It is used to describe being bad at something. Example: my google-fu is weak. However, I seldom see it used well.
I like to turn off my display when I know I'm going to go away for a short while. I don't like to wait until the display inactivity timeout kicks in. (Which I usually set to 10 minutes.)
My old SOTA notebook has a special keystroke to do it. I've not seen this in other notebooks. What I do is to set the Windows display inactivity timeout to 1 minute. Works well enough for me.
But Vista buries this option one level deeper than XP. So, instead of right-clicking on the power icon and changing the timeout right away, I have to select the power plan before I can modify its settings.
The nearest I can get is to configure two nearly identical power plans, one with a 1-min timeout and the other with a 10-min timeout. It's a workaround.
Another example is the IP address. Previously, we just needed to right-click on the network icon. Now? It's buried so deep that it's faster to type cmd /k ipconfig.
These, as well as many other UI changes in Vista, have given me a lot to think about. Basically, no one is going to be 100% satisfied with the UI. The only way is to allow the user to customize it. For example, I should be able to choose between a 1-min and a 10-min timeout when I click on the power option. Why can't I do that?
It may be surprising, but Windows has the least customizable UI today. Even Linux is ahead.
I write my entries in blog.html, then I rename it to blog-date.html after a while.
This is bad because any links to the new entries are lost when I do the rename. I want my entries to have a permanent link right away.
First, I thought of using the HTML redirection method. This method will redirect you to the latest blog. However, you cannot bookmark the original link because it is replaced by the blog link.
So, right now I'm using a frameset to load the latest blog into the frame. It's transparent to the user, but the browser must support frames.
I'm still searching for a simple solution.
I'm breaking in a new keyboard, and a few keys are starting to wear off: the left shift, left ctrl, 'a' and 's'. I'm surprised the spacebar doesn't show it yet.
Shift is used in almost every sentence. Ctrl is used for a variety of shortcuts, most commonly ctrl-c (copy), ctrl-x (cut) and ctrl-v (paste).
The 's' key is probably because I use ctrl-s (save) quite often. The 'a' key could be just natural wear and tear; it's even ahead of 'e'.
I expect 'z' and 'x' to show some wear in the future too, because they are used as back and forward in Opera. (Up till Opera 9.2. They were changed to some other keys, but I am so used to the Opera 9.2 mapping that I use it even today.)
The nice feel of the keys doesn't last. Once the thin rubbery layer wears off, the keys regain their plasticky feel.
Macross was not a show destined for success. It was meant to be just another run-of-the-mill robot show in the early 80s, but something amazing happened along the way.
It is said that Macross started as a parody of Gundam, but you won't know that by watching the show. I suspect it may have been conceived that way, but it got serious when the writers realized they had a good story on their hands.
The troubled production history was what made the show stand out. It was conceived as a 48-episode show, but was pruned to 27 episodes, ending at a big battle, due to budget constraints. Once it aired, the show was very popular, so 9 more episodes were added to depict the aftermath. It is commonly accepted that while the first 27 episodes are good (story-wise; the animation sucks), it is the last 9 episodes that made Macross memorable.
First, it allowed the love triangle to play out. Minmay was supposed to win the love triangle originally — or remain unresolved, perhaps — but with the new episodes, Misa was given more characterization that allowed Hikaru (our hero) to fall in love with her — slowly. Who won and who should have won remains a debate among fans today, 27 years later.
Second, the aftermath wasn't pretty. It's not a lived-happily-ever-after following the big battle. No, there were resource issues, discontentment, rebellions and yearning for the old way of life. Very few live-action shows touch on the aftermath, much less cartoons. This made Macross a very serious show indeed.
I've tried the Ramly burger in Singapore many times. None matched what I ate in JB (Malaysia).
Why is that? I'm not sure. They use the same ingredients, which are basically an overdose of chilli powder, pepper, chilli sauce and some light soy sauce. The Singapore version even has cheese and some sort of honey mustard (my guess). However, you never taste the sauce much. The JB one is overwhelming.
Some people said it's due to the patty. However, I don't think so. After closely observing how the Ramly burger is prepared, I came to this conclusion: it's because the patties in Singapore are not sliced into two. As a result, the chilli powder and pepper are "boiled" away.
Another possibility is that the powders and chilli sauce are impotent. This is also plausible, especially for the chilli sauce. The burger simply does not taste hot at all.
I will request the sellers to slice the patty into two in the future. I have not met a single seller who does this. I wonder why. If they are not able to do it, then I will ask them to put the chilli powder and pepper last.
Also, the default Ramly burger in Singapore comes with the egg. I prefer not to have the egg because its taste is too strong.
There are 5 kinds of Destroids in Macross: the Tomahawk, the Defender, the Phalanx, the Spartan and the Monster.
The first two have been released as toys in the 1/60 scale. The third is in the pipeline. It is speculated that the Spartan will not be released, because it doesn't share any parts with the other Destroids. (The first three have the same lower body/legs.)
The Monster, full name the VB-6 Koenig Monster, was released in the 1/100 scale a few years back. It appeared in Macross Zero and was released together with the valkyries. It was made a fully transformable robot, because at the time it was thought that only transformable robots would sell. In the original show, it was a simple Destroid, but it was made transformable in a PS-X game.
The Monster is a heavy-duty Destroid. Even in 1/100 scale, it is bigger than the other Destroids in 1/60. Needless to say, it was also expensive and had a very low production run. The prototype unit was shown in 1/60 scale, but it was eventually produced in 1/100.
I am very sure the Monster will be made in the 1/60 scale eventually. The simple reason is because it appears in Macross Frontier! That's how you increase the appeal for such toys: keep using them in new shows! Luckily, the Monster is a design that has withstood the test of time.
In Macross Frontier, the Monster appears in the plane and Destroid mode only. A mobile heavy-duty Destroid makes a lot of sense. The robot mode doesn't.
Macross Frontier also uses the traditional Meltran Queadluun-Rau mecha (also made its first appearance in the original show). This has been released in 1/60 scale a few years ago, but it must have been unpopular. Well, use the same trick to get new fans!
The SDF, the human mothership in the original series, also made a short appearance as a wreck. While not the SDF-1, it looks exactly the same. Should I mention that the SDF will be released in the 1/2000 scale soon? It's around 60cm and is listed at US$500.
Nostalgia is a wonderful thing. It leaves you with the good memories of the past, whereas bad/unpleasant parts are simply forgotten.
A friend of mine was admiring an old Transformers seeker in plane mode and wondered why he got the Masterpiece version. The MP Starscream was very nice in both modes, but it was hard to transform. It's like a jigsaw puzzle.
I told him that the robot mode was very ugly and had very little articulation, but he was skeptical because he had forgotten about it. Later, I saw a seeker in robot mode and showed it to him. He was shocked.
Beware of the nostalgia trap.
Speaking of MP Starscream, the designer is Kawamori Shouji. Any Macross valkyrie fan will recognize the name: the valkyrie designer.
It is said that Kawamori created the original Diaclone seeker in the mid-70s that became Starscream, before he went on to design the valkyries. If this is true, he has come full circle. However, this information is not verified. (He designed Diaclone toys in the early 80s, but I doubt he designed the seeker. He would have mentioned it in his MP Starscream interview.)
Macross Frontier is the latest Macross show. The 25-episode TV series ended in September last year, but I didn't bother to watch it. It was very hyped up, and I was very sure it wouldn't live up to the hype.
A Macross show has much to live up to. It must have valkyries (transforming planes), a songstress, a love triangle, and an alien race that is won over by love, rather than by might. (Macross is constrained by its legacy just as Gundam is. Everyone knows what a Gundam show must have too.)
Since the original series, Macross has had more misses than hits.
Recently, I came across a detailed episode-by-episode synopsis. I read it and thought, "it's not too bad!", so I watched the show. I must say Macross Frontier managed to pull it off. In fact, I'm rather surprised at how well it does.
The animation is very good. It is definitely more than acceptable for a TV show, even though some people are still not satisfied with it. Even the 3D scenes are integrated very well.
The plot is fairly okay, but I dislike the use of implants and cyborgs, a la Ghost in the Shell and The Matrix. Once you go down this route, the stakes get raised very high very fast.
The songs in Macross Frontier are acceptable. Only one or two can be called good. Every song from the original show is good. I'm biased, obviously.
I have no comments on the valkyries. I'm not really a valkyrie person. I like the VF-1, because it is based on the F-14 Tomcat and I love the F-14. In Macross Frontier, they base the VF-25 on the VF-1. Good move! The VF-1 is by far the most popular valkyrie.
There are comments about excessive fan-service. I was afraid of that, because it would make the show hard to introduce to newbies. I thought it was acceptable for the most part. Sadly, it did not escape some anime cliches, such as girls with big breasts.
Some fans even complain about excessive homage to older Macross shows. There are a ton of them, especially in the earlier episodes. Most are quite well done. They remind me just how awesome the earlier shows are. Macross: DYRL (the movie) was released in 1984 and could pass for a new show even today. It is that good visually. (The story is a retelling of the TV series and is pretty good too.) When I listen to the song Do You Remember Love, I know how the Zentradi feel: awe, wonder, shock, deculture. It literally still stops me in my tracks.
Another common complaint is the lack of resolution to the love triangle, unlike in the original show. Well, this is pretty common these days because it leaves the door open for sequels. (What do you expect?)
I enjoy Macross Frontier as a show, but I consider it distinct from the original Macross. To me, Macross is just the original TV series and DYRL. The original characters, the valkyries, the SDF-1, the Zentradi, the songs, the culture shock, that's what made me a fan.
The follow-ons all involve distinct characters and timelines. They are part of the Macross universe, but they are not the Macross. It is said that Macross Frontier will be the Macross for a new generation of fans. If that's what it takes for the Macross legacy to live on, bring it on!
I have decided to unify my three blogs:
They will stay as-is, but will no longer be updated.
The main reason for having just one blog is so that I can comment more freely without being constrained by the three categories.
I will also start using tags to identify the contents. I will use broad categories first; the use of specific tags should be semi-automated in the future.
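Something like this keyword-based lookup is a minimal sketch of what the semi-automated part could look like. The keyword-to-tag map below is purely illustrative, not my actual categories:

```python
# A minimal sketch of semi-automated tagging: scan a post for known
# keywords and suggest the matching broad-category tags.
# The keyword-to-tag map is purely illustrative.

KEYWORD_TO_TAG = {
    "valkyrie": "toys",
    "destroid": "toys",
    "mx-5": "car",
    "gearbox": "car",
    "webhosting": "web",
    "apache": "web",
}

def suggest_tags(post_text):
    """Return the set of broad tags whose keywords appear in the post."""
    text = post_text.lower()
    return {tag for keyword, tag in KEYWORD_TO_TAG.items() if keyword in text}

# Example: a post about the MX-5's gearbox would be suggested the "car" tag.
print(suggest_tags("I changed the MX-5's gearbox and other parts."))
# {'car'}
```

Specific tags would still need a manual pass; the script only suggests the broad ones.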
| Category | Jan | Feb | Mar | Apr |
|---|---|---|---|---|
| Basic | 1,061.55 | 985.02 | 2,574.12 | 685.04 |
| Cash | 213.60 | 241.50 | 172.10 | 151.00 |
| Vehicle | 258.35 | 307.97 | 577.60 | 1,886.78 |
| Others | 116.30 | 875.00 | 308.15 | 391.94 |
| Total | 1,649.80 | 2,409.49 | 3,631.97 | 3,114.76 |
Basic expenses are low because my father declined to take the parents' allowance and because I wasn't charged for the phone/broadband.
Vehicle expenses are high because I changed the MX-5's gearbox and other parts ($1,680).
The bulk of the other expenses comes from MP-08 Grimlock ($121, excluding deposit), webhosting ($120) and PR renewal ($50).