Minister's cautionary note that not all old flats undergo Sers may force buyers to weigh not just location and size, but also the remaining lease
Last year, Ms Siah Yuet Whey bought a Housing Board flat that is older than she is.
She is 28 years old. It is 44.
This means that she will most likely outlive its lease - which runs out in 55 years. She will be 83 then.
That did not stop her and her husband, 31, from paying more than $700,000 for the three-room unit in Jalan Ma'mor, in Whampoa.
At 861 sq ft, it works out to $854 per sq ft - the third-highest amount paid last year for flats with less than 60 years of lease left. "It is a rare terraced unit in an area with a lot of character, and it does not feel like it is very old at all. We think it is a fair price," said Ms Siah.
Last Friday, National Development Minister Lawrence Wong, alarmed by a news report on old HDB flats that fetched high prices, sounded a cautionary note about such buying behaviour. Some appeared to have bought those units on the assumption that their flats will benefit from the Selective En bloc Redevelopment Scheme (Sers), he said in a blog post.
This is not so, he said. Only a small minority qualify for Sers, which compensates home owners for their flats and gives them new ones with fresh leases. The rest of the flats will return to the state when their leases expire.
In particular, he advised younger couples to buy a home "that covers you and your spouse to age 95".
Mr Wong's comments have attracted a mix of bewilderment and concern, especially among those who live in old units.
"It wasn't cheap, but I thought the value will keep going up," said IT engineer Andy Zhang, 40, who also paid top dollar for an older home.
Last January, he paid $950,000 for his five-room Bukit Timah flat. It is 43 years old and has just 56 years of lease remaining. This was last year's record for an HDB unit with less than 60 years left on the lease.
To pay for it, Mr Zhang sold a newer three-room flat in Clementi.
Still, he said, he has no regrets. "The age of this flat was not an immediate consideration. I bought this place because my daughter's school is nearby and the location is good."
Flats of a certain vintage are more popular on the resale market due to factors such as location (they tend to be in mature estates), size and amenities in the neighbourhood.
Statistics show they account for a disproportionate share of transactions in the HDB resale market.
Almost half of all resales last year were of flats older than 30 years. This is even though such flats make up only around one-third of the HDB housing stock.
Even flats older than 40 years sell well, despite loan restrictions on how much buyers can withdraw from their Central Provident Fund to finance such purchases. They formed 11 per cent of transactions from 2014 to last year, even though just 7 per cent of all HDB flats are of that age.
Mr Zhang, for instance, said his resale flat - at 1,346 sq ft - is larger than today's five-room units, an important feature for his family of four.
Ms Siah, a property analyst, is confident she can find a buyer within the next five years or so, due to the rarity of HDB terraced units. There are only 285 left in Singapore.
But generally, she acknowledged, buyers will be more cautious about older flats following Mr Wong's warning. Said the chief executive of property portal Digital Real Estate Assistant: "Location is still the prime factor, not age. But his comments mean that people may be a lot more concerned about the age component now."
ERA Realty key executive officer Eugene Lim said: "It is going to be a lot more difficult to find buyers for older resale HDB flats. Prices for these flats may even take a big hit due to lack of demand. From now on, it is quite likely home buyers will view older 99-year flats differently."
Still, the problem looms far in the future for Mr Zhang, now more concerned about the living conditions for his family.
Asked what his plans are if his home cannot keep its value or if he is unable to find a buyer, he laughed. "That is something to worry about in 40 to 50 years' time. Who knows if I will still be around."
Many people will blame the Government for this situation, but it is the people who are ignoring reality.
The Government has been discouraging the sale of old flats via a "subtle" market mechanism: loan restrictions. But too many people are cash rich.
People think there will be value left after the lease expires, even though the agreement says in black and white that there is none. Now the Government has had to come out and say it.
Logistically, it is impossible to replace all the old flats. There are just too many of them. One-third of HDB flats are now more than 30 years old (built before 1987).
IMO, when a significant number of flats are 50 years old (in 20 years!), the Government will be forced to come up with some extension scheme.
It will be harder for vehicles to pass inspection from next year onwards.
Cars:
Registered | Now | Apr '18 |
---|---|---|
>= 1/2001 | 3.5% CO | 1% CO, 300 ppm hydrocarbons |
>= 4/2014 | 3.5% CO | 0.3% CO, 200 ppm @ 2k RPM |
Motorcycles:
Registered | 4-stroke | 2-stroke |
---|---|---|
>= 7/2003 | 2,000 ppm | 7,800 ppm |
>= 10/2014 | 1,000 ppm, 3% CO | – |
This kills two birds with one stone. First, it ensures cleaner vehicles — Singapore has 956,430 vehicles! Second, this might deter people from renewing their car's COE.
If the car cannot pass inspection, its road tax cannot be renewed...
Now that we know how to scale data access and operations, we quickly bump into the next bottleneck: the interface.
If the interface is designed to accept only one entry at a time, we cannot scale even if we want to. For example:
```bash
ip_addrs=($a $b $c $d $e)
for ip_addr in "${ip_addrs[@]}"; do
    curl "$svs_url?cmd=update&ip_addr=$(urlencode $ip_addr)"
done
```
This calls curl five times. And each time, the server can only process one entry.
The great thing about dynamic languages is that they allow dynamic types, so let's make use of that:
```bash
ip_addrs=($a $b $c $d $e)
ip_addr_qs=
for ip_addr in "${ip_addrs[@]}"; do
    ip_addr_qs="$ip_addr_qs&ip_addr[]=$(urlencode $ip_addr)"
done
curl "$svs_url?cmd=update$ip_addr_qs"
```
It is a convention (started by PHP?) that a `[]` suffix means an array. Let's make the server accept both a scalar and an array:
```php
$ip_addr_arr = $_GET["ip_addr"];
if (!is_array($ip_addr_arr))
    $ip_addr_arr = array($ip_addr_arr);
```
By doing this, we only call curl once and the server can process the entries as a batch.
As a rule of thumb, if an interface is called in a loop to process the entries, it should allow multiple entries to be passed in with one call.
There are some "designers" who resist this. Sorry, they are wrong.
Note: in this example, I use the GET method to do processing. This is not good practice, because GET is supposed to be safe (free of side effects) and cacheable. I make this mistake all the time.
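The fix is simple; here is a sketch of the server side, assuming the same ip_addr[] field (the client would then send the entries in the POST body instead of the URL):

```php
// Sketch: read the batch from POST instead of GET, so the update
// no longer rides on a method that is supposed to be side-effect free.
$ip_addr_arr = $_POST["ip_addr"];
if (!is_array($ip_addr_arr))
    $ip_addr_arr = array($ip_addr_arr);
```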
Suppose we want to check the status of a bunch of devices on the network. Obviously we start with one:
```php
$status = check_status($ip_addr);
```
Then we scale it up with a `foreach` loop:
```php
foreach ($dev_arr as &$dev)
    $dev["status"] = check_status($dev["ip_addr"]);
```
Um, no.
The reason is very simple. Network operations are very slow. A single check may take 50 to 100 ms. Just checking 10 devices will take 500ms to 1s!
We need to check the devices in parallel.
In traditional programming, a logical way is to make the program multi-threaded. In a naive implementation, we will spawn one thread per device. But this can overwhelm the machine temporarily if we are not careful. A smarter implementation will use a thread pool to check n devices in parallel.
If we are restricted to one thread (a common restriction among scripting languages), we have to use the `select()` pattern.
Suppose we check the device status with curl. curl supports a mode of operation called multi curl, which basically allows multiple curl operations to run at the same time. It is hard enough to get right that it is best written once as a helper function and reused across projects.
In addition to handling the multi curl state machine, we need to make decisions such as how many requests to run in parallel and how long to wait before timing out.
Our code now looks like this:
check_all_status($dev_arr);
All the work is hidden.
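For illustration, here is a minimal sketch of what such a helper might look like. It is not the actual implementation: the /status URL, the 5-second timeout and the use of CURLOPT_RETURNTRANSFER are assumptions for the sketch.

```php
// Sketch of a multi curl helper: fetch every device's status in parallel.
// Assumed: each device answers HTTP on /status. Error handling omitted.
function check_all_status(&$dev_arr)
{
    $mh = curl_multi_init();
    $handles = array();

    foreach ($dev_arr as $i => $dev) {
        $ch = curl_init("http://" . $dev["ip_addr"] . "/status");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return body, don't print it
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);            // assumed timeout
        curl_multi_add_handle($mh, $ch);
        $handles[$i] = $ch;
    }

    // Drive all transfers together; curl_multi_select() waits for
    // network activity instead of busy-looping.
    do {
        curl_multi_exec($mh, $active);
        if ($active)
            curl_multi_select($mh);
    } while ($active);

    foreach ($handles as $i => $ch) {
        $dev_arr[$i]["status"] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
}
```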
Checking 10 devices is as fast as checking the slowest device. It will typically take 50 to 100 ms — the same as checking one device!
This really shows the importance of scaling.
What if we need to query the device again, depending on the result from the first check?
Previously, it was very clean:
```php
foreach ($dev_arr as &$dev) {
    $dev["status"] = check_status($dev["ip_addr"]);
    if ($dev["status"] == something)
        check_detailed_status($dev["ip_addr"], url_1);
    else
        check_detailed_status($dev["ip_addr"], url_2);
}
```
But this is the slowest code you can ever write.
After modifying to use multi curl:
```php
check_all_status($dev_arr);

$next_dev_arr = array();
foreach ($dev_arr as $dev) {
    if ($dev["status"] == something)
        $next_dev_arr[] = url_1;
    else
        $next_dev_arr[] = url_2;
}
check_all_detailed_status($dev_arr, $next_dev_arr);
```
Better, but not optimal. There is a gap between the first and second part — the first part must complete before the second part starts.
To be optimal, we need to be able to overlap the two parts. When a device finishes the first part, it will go on to the second part.
Needless to say, the code is now much more complicated.
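For the curious, here is a sketch of what the overlapped version might look like. Again, the /status URL, the "detail" field and the placeholders (something, url_1, url_2) are assumptions carried over from the example above, and error handling is omitted.

```php
// Sketch: overlap the two stages. When a device finishes its first check,
// its detailed check is queued immediately; there is no global barrier.
function check_all_overlapped(&$dev_arr)
{
    $mh = curl_multi_init();
    $jobs = array();            // resource id => array(device index, stage)
    $pending = count($dev_arr); // devices that have not finished stage 2

    foreach ($dev_arr as $i => $dev) {
        $ch = curl_init("http://" . $dev["ip_addr"] . "/status");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $jobs[(int)$ch] = array($i, 1);
    }

    while ($pending > 0) {
        curl_multi_exec($mh, $active);
        curl_multi_select($mh, 0.1);    // wait for network activity

        // Harvest whatever has completed, and chain the next stage.
        while ($info = curl_multi_info_read($mh)) {
            $ch = $info["handle"];
            list($i, $stage) = $jobs[(int)$ch];
            $body = curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);

            if ($stage == 1) {
                $dev_arr[$i]["status"] = $body;
                // Start stage 2 for this device right away.
                $url = ($body == something) ? url_1 : url_2;
                $ch2 = curl_init($url);
                curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
                curl_multi_add_handle($mh, $ch2);
                $jobs[(int)$ch2] = array($i, 2);
            } else {
                $dev_arr[$i]["detail"] = $body;
                $pending--;
            }
        }
    }
    curl_multi_close($mh);
}
```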
And this is another lesson: the structures of the simple code and the optimal code are totally different. You cannot modify one into the other; it must be totally redesigned and rewritten.
Adding a new row to SQLite is a basic operation. Can it go wrong?
```sql
INSERT INTO tbl(c1, c2, c3) VALUES (v1, v2, v3);
```
Scaling it naively to 5 rows:
```sql
INSERT INTO tbl(c1, c2, c3) VALUES (v11, v12, v13);
INSERT INTO tbl(c1, c2, c3) VALUES (v21, v22, v23);
INSERT INTO tbl(c1, c2, c3) VALUES (v31, v32, v33);
INSERT INTO tbl(c1, c2, c3) VALUES (v41, v42, v43);
INSERT INTO tbl(c1, c2, c3) VALUES (v51, v52, v53);
```
It works, but performance drops... drastically. It is not really noticeable at 5, but it is at 50, and 50 is not really a big number.
What went wrong?
By default, each statement executes as its own implicit transaction. To execute multiple statements efficiently, we should use a bulk statement or wrap them in a single transaction.
This works (from SQLite 3.7.11 onwards):
```sql
INSERT INTO tbl(c1, c2, c3) VALUES
    (v11, v12, v13),
    (v21, v22, v23),
    (v31, v32, v33),
    (v41, v42, v43),
    (v51, v52, v53);
```
Or this:
```sql
BEGIN TRANSACTION;
INSERT INTO tbl(c1, c2, c3) VALUES (v11, v12, v13);
INSERT INTO tbl(c1, c2, c3) VALUES (v21, v22, v23);
INSERT INTO tbl(c1, c2, c3) VALUES (v31, v32, v33);
INSERT INTO tbl(c1, c2, c3) VALUES (v41, v42, v43);
INSERT INTO tbl(c1, c2, c3) VALUES (v51, v52, v53);
END TRANSACTION;
```
An INSERT statement with an implicit transaction may take 2 ms, so 50 rows will take 100 ms. A bulk statement or a single transaction takes just 4 to 10 ms. The overhead is that significant.
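From PHP, for example, the transaction approach might look like this. A sketch using PDO; the database file, table and values are placeholders.

```php
// Sketch: batch-insert rows inside one explicit transaction via PDO.
$db = new PDO("sqlite:test.db");
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$row_arr = array(
    array("v11", "v12", "v13"),
    array("v21", "v22", "v23"),
    array("v31", "v32", "v33"),
);

$db->beginTransaction();
$stmt = $db->prepare("INSERT INTO tbl(c1, c2, c3) VALUES (?, ?, ?)");
foreach ($row_arr as $row)
    $stmt->execute($row);   // one prepared statement, executed per row
$db->commit();              // the whole batch commits as one transaction
```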
Database access is one of the most fundamental operations. Being correct is not good enough. We have to be optimal.
The first version of vtec is finally done! :clap:
It is a distributed video encoder: it distributes videos to a farm of machines to encode. This is useful when there are many videos to encode.
There are three components: the video encoder, the distributed workers, and the job server.
The implementation is simplified by the fact that the machines can access one another via NFS over a Gigabit network.
Correction: only the engine part is done. There is a progress webpage (rendered purely in PHP, no JavaScript, a first for me :lol:), but there is no webpage or REST API to add jobs. I add them to the SQLite database directly. :-O
The video encoder is a frontend to HandBrakeCLI. It supports dir-level encoding options, pre- and post-processing, and naming the output files in a consistent manner. This is an entire solution in its own right and has been more-or-less field tested.
There are three parts to the distributed workers: the worker itself, a controller, and a progress monitor. They are all shell scripts.
The worker polls the job server for new jobs and calls the video encoder. There can be multiple workers per machine, each using a pre-defined set of CPU cores. Originally, the workers slept in short stretches (no longer than 2 mins) in order to respond quickly to a new job. Now, they are put into a long sleep* and the per-machine controller wakes them up.
The per-machine progress monitor sends job progress to the job server. The worker cannot do this itself because it calls the video encoder synchronously and is blocked until it finishes. The progress monitor goes into a long sleep when there are no active jobs. This functionality has since been folded into the controller; it makes the controller a little more complex, but there is one less command to run.
* What is a long sleep? We make the script block somehow (using 0% CPU) and then send it a signal to make it resume. It's like an interrupt. :lol:
This architecture came about because the worker was developed first. Then, with multiple workers, I found the short sleeps inefficient; hence the controller.
A better solution is to have just the controller and let it spawn workers as needed (up to the predefined limit for that machine).
An even better solution is to have just one controller per farm and let it create workers as needed remotely through ssh.
This illustrates the scaling problem. A solution that works may not scale optimally to a large data set. It is often necessary to redesign.
Why not just design a large-scale solution upfront? There are three reasons. First, it is a lot more work. Second, we do not know if we scale correctly (too little, useless; too much, overkill). Third, it may not be needed.
I still prefer to do it the old-fashioned way. Do a simple solution, see its bottlenecks and then decide how much to redesign.
For example, I was not sure what the I/O load would be on the file server — where the files reside (which can be different from the job server) — when there are 20+ concurrent encodings. But from preliminary results, it seems to be negligible.
After encoding, a worker will wait (for a duration depending on its 5-minute load average) before getting a new job, if there are idle workers. This lets other, less loaded workers have first dibs.
If there is no job, the job server returns a sleep time to the controller, based on whether other controllers are waking up soon. This allows the farm to respond to new jobs in a timely manner.
Each worker uses a fixed number of cores; it is not adaptive. Suppose we have half as many videos as workers: we could speed up encoding by pausing half the workers and letting the other half use twice as many cores.
What if we only have one video to encode? Only one worker is doing the work. We could break the video up into chunks, let each worker encode one chunk, then merge the chunks back into one stream when all are done. This is especially helpful for HEVC, which is slow as molasses.
Effective immediately (the second Feb 2017 COE bidding), motorcycle ARF is tiered:
OMV | ARF |
---|---|
First $5k | 15% |
Next $5k | 50% |
Above $10k | 100% |
As usual, LTA claims this is okay because the majority of buyers are not affected.
LTA does not have statistics by motorcycle OMV, so we will use engine capacity (cc) as a proxy.
CC | 2006 | 2011 | 2016 |
---|---|---|---|
<=200 | 110,326 | 110,188 | 97,924 |
<=500 | 21,720 | 21,575 | 23,237 |
>500 | 9,832 | 13,917 | 21,278 |
Indeed, there is a growing trend towards big bikes.
10% of de-registered motorcycle COEs go into Cat E (the open category). That has been blamed for the shrinking motorcycle quota, and LTA will now stop the transfer. But really, the elephant in the room is that more motorcycle COEs are being renewed:
#Years | 2006 | 2011 | 2016 |
---|---|---|---|
<=10 | 108,230 | 104,186 | 86,535 |
<=20 | 19,667 | 30,251 | 46,823 |
>20 | 13,984 | 11,243 | 9,081 |
It is a vicious cycle: a lower COE quota leads to higher COE prices, causing more renewals, resulting in a lower quota.
For bikes, renewing is a no-brainer. There is no PARF rebate to get back, so why not pay the prevailing COE and keep your bike?
I have the ultimate killer suggestion for LTA: owners have to pay half the vehicle's OMV to renew its COE.
For cars, this means giving up the PARF rebate and then paying half the OMV. Say the OMV is $20,000: renewing would then cost the forfeited PARF rebate plus $10,000, on top of the prevailing COE.
This will prevent people from renewing COEs in perpetuity (since, currently, there is no further penalty at the 20th year).
The price of water is broken down into four parts: tariff, water conservation tax (WCT), waterborne fee (WBF) and sanitary appliance fee!
 | 7/2000 | 7/2017 | 7/2018 |
---|---|---|---|
Tariff | $1.17 | $1.19 | $1.21 |
WCT | 30% | 35% | 50% |
Total price | $2.10 | $2.39 | $2.74 |
Tariff (>40m3) | $1.40 | $1.46 | $1.52 |
WCT | 45% | 50% | 65% |
Total price | $2.61 | $3.21 | $3.69 |
We still need to add GST to the total price. :-O
It is instructive to see the previous water tariffs. The last big increase was done over four years:
 | <7/97 | 7/97 | 7/98 | 7/99 |
---|---|---|---|---|
Tariff | $0.56 | $0.73 | $0.87 | $1.03 |
WCT | 0% | 10% | 20% | 25% |
WBF | $0.10 | $0.10 | $0.20 | $0.25 |
Tariff (>20m3) | $0.80 | $0.90 | $0.98 | $1.06 |
WCT | 15% | 20% | 25% | 30% |
WBF | $0.10 | $0.15 | $0.20 | $0.25 |
Tariff (>40m3) | $1.17 | $1.21 | $1.24 | $1.33 |
WCT | 15% | 25% | 35% | 40% |
WBF | $0.10 | $0.15 | $0.20 | $0.25 |
Water was so cheap before 1997? Wow, I don't remember.
Water tariff is tiered. PUB should create a new category to encourage people to conserve water. My proposal:
 | Price |
---|---|
<5m3 | $2.10 |
<40m3 | $2.74 |
>=40m3 | $3.69 |
But the Government prefers to give out (annual) U-Save Rebate:
Flat type | Rebate | Increase |
---|---|---|
1-, 2-room | $260 | +$120 |
3-room | $240 | +$100 |
4-room | $220 | +$80 |
5-room | $200 | +$60 |
EC | $180 | +$40 |
It has two advantages: it targets only Singaporeans, and it makes the recipients beholden to the Government.
Just a quick note. GST has not increased for 10 years already. :lol:
The Earth is special in our Solar System:
* I'm going to postulate that this determines if a planet is "alive" or not.
1 Because it is in the habitable zone. But 3 billion years ago, our Sun was 30% less bright and Earth should have been too cold for surface water. This is the faint young Sun paradox.
2 Probably possible due to the lubricating effect of water.
Venus cannot be ignored. It is our sister planet: 95% of Earth's diameter, 90% of its surface area and 81% of its mass. Yet it is so different.
The last two points are very interesting to me. When did Venus start to have its slow retrograde rotation? (Basically, it was game over once this happened.) What caused it? Was it caused by an impact? (Likely.) But where was the impactor?
Was it recent or in the distant past? Could Venus have had life before that? It was in the optimum habitable zone 3 to 4 billion years ago.
What happened 500 million years ago?
Planetary scientists think a runaway greenhouse effect had occurred on Venus, causing its demise. This is basically positive feedback running amok. People are worried about global warming on Earth because they fear the same may happen here.
Personally, I would rather investigate the mysteries of Venus than explore Mars. Mars is too small to hold on to lighter gases such as oxygen.
A yellow star*. Four rocky inner planets and four giant gas/ice outer planets.
How typical is our solar system?
Consider these.
Our Sun is not a very big star. Nevertheless, it is in the 90th percentile by brightness in the Milky Way. Red dwarfs and giant stars are not hospitable to life.2
Something stabilized Jupiter's inward spiraling orbit; see the hot Jupiters in other systems.
Comets brought water from the outer solar system to Earth soon after its formation. No water, no life. Okay, this one may not be so rare.
Earth has a large moon that stabilizes its axial tilt, making its climate stable. This may or may not be an issue; life is tough if it can take root.
Earth has a magnetic field. This is very simple: no magnetic field, no life. The field shields the Earth from the solar wind of charged particles, which would otherwise strip away the ozone layer; the ozone layer in turn absorbs UV radiation that is lethal to life.
* Our Sun is actually white. It is a G-type main sequence star that is sometimes nicknamed "yellow dwarf".
2 Never say never, but it would be extremely challenging, particularly for intelligent life to evolve.
x265 is still prone to smoothing. These options are recommended to retain details (some are new):
```
--tune grain --aq-mode 3 --no-sao --no-strong-intra-smoothing --ctu 32 --max-tu-size 16 --tu-inter-depth 2 --tu-intra-depth 2
```
--tune grain. Retains details, but increases bitrate substantially.
--aq-mode 3 biases toward dark regions and reduces banding in 8-bit color depth.
--no-sao. SAO (Sample Adaptive Offset) is also known as smooth-all-objects. :lol:
--no-strong-intra-smoothing. With a name like this, of course you want to turn it off.
--ctu 32, --max-tu-size 16. For HD and lower encodes. The default CTU of 64 pixels is only suitable for UHD encodes. For 480p encodes, I'm considering using CTU of 16. (Smaller blocks = more details.)
--tu-inter-depth 2, --tu-intra-depth 2. Increase search depth to use smaller TU.
Grainy Blu-ray source. CRF 22, slow preset.
Intel Xeon CPU E5-2670 v2 @ 2.50 GHz. (3 real and 3 HT cores are used.)
Preset | FPS | QP | kbps |
---|---|---|---|
slow | 1.767902 | 25.77 | 6,475.79 |
+fine | 1.609524 | 25.87 | 6,742.30 |
+aq-mode 3 | 1.535781 | 24.56 | 10,012.88 |
+aq-mode 3, fine | 1.415599 | 24.66 | 10,454.77 |
+grain | 1.213808 | 24.17 | 16,183.83 |
+grain, aq-mode 3, fine | 1.178303 | 24.17 | 16,069.47 |
The fine settings increase the bitrate a little; that is expected, as they retain more details. aq-mode 3 really blows up the bitrate, and grain takes the cake.
Preset | FPS | QP | kbps |
---|---|---|---|
slower | 0.536763 | 25.94 | 7,318.01 |
+tweaked | 1.282377 | 25.83 | 6,614.38 |
+tweaked, fine | 1.240994 | 25.98 | 6,731.26 |
Tweaked:
```
--bframes 6 (was 8)
--rc-lookahead 40 (was 30)
--lookahead-slices 2 (was 4)
--rd 4 (was 6)
```
The main effect comes from --rd: level 6 is really slow.
To try (they won't help as much, though):
```
--preset veryslow
--rd 6
--amp
--no-rskip
--aq-motion (v2.2+ only)
--deblock -3:-1
```
I have not tested these.
Grainy Blu-ray source. CRF 22.
Intel Xeon CPU E5-2670 v2 @ 2.50 GHz. (3 real and 3 HT cores are used.)
Preset | FPS | QP | kbps |
---|---|---|---|
ultrafast | 16.943726 | 28.68 | 3,486.33 |
superfast | 12.737048 | 28.44 | 4,007.72 |
veryfast | 7.922411 | 25.92 | 5,659.28 |
faster | 7.747071 | 25.92 | 5,660.07 |
fast | 6.836653 | 25.80 | 5,699.99 |
medium | 4.054692 | 25.91 | 6,593.67 |
slow | 1.767902 | 25.77 | 6,475.79 |
slower | 0.536763 | 25.94 | 7,318.01 |
veryslow | 0.365613 | 25.85 | 7,187.14 |
placebo | 0.145978 | 25.89 | 7,371.47 |
Intel Xeon CPU E5-2660 v3 @ 2.60 GHz. (3 real and 3 HT cores are used.)
Preset | FPS | QP | kbps |
---|---|---|---|
ultrafast | 23.907555 | 28.69 | 3,475.09 |
superfast | 18.179565 | 28.45 | 4,006.14 |
veryfast | 12.346167 | – | 5,659.26 |
faster | 12.036365 | – | – |
fast | 10.682315 | – | – |
medium | 6.119694 | – | – |
slow | 2.551558 | – | – |
slower | 0.732159 | – | – |
veryslow | 0.490835 | – | – |
placebo | 0.189853 | – | – |
AVX2 is not bit-identical.
The medium and slower presets are slower than in v1.9, and the bitrate is now much higher.
There is a big decrease in speed from medium to slow, and again from slow to slower.
I'll probably use a tweaked slow preset. It is the slowest I can bear.
HandBrakeCLI fails to compile on RHEL 6.x due to four missing components. To fix it, edit make/include/main.defs and move these out of the `if` block (line 44):
```make
MODULES += contrib/jansson
MODULES += contrib/lame
MODULES += contrib/libopus
MODULES += contrib/x264
```
I was not able to use the included harfbuzz. I had to download it (1.4.2), build and install it separately.
Note: this is assuming the system has compiled HandBrakeCLI 0.10.x before. Otherwise, it requires more components.
There are official instructions to build HB on RHEL 6.x, but (i) I missed them and (ii) they are slightly more complex.
HandBrake 1.0.2 brings with it x264 r148 (was r146 in 0.10.x) and x265 v2.1 (was v1.9 in 0.10.5).
Date | #Attempts | root % | #IP addr |
---|---|---|---|
2015/9 | 904,990 | 96.6% | 484 |
2015/10 | 426,787 | 95.8% | 335 |
2016/9 | 345,780 | 86.5% | 609 |
2016/10 | 425,678 | 92.4% | 608 |
2016/11 | 26,560 | 90.1% | 239 |
2016/12 | 9,320 | 61.9% | 473 |
2017/1 | 14,591 | 62.7% | 1,289 |
Count attempts:
```bash
sudo last -f /var/log/btmp.1 | head -n -2 | wc -l
```
Count root attempts:
```bash
sudo last -f /var/log/btmp.1 | head -n -2 | grep "^root " | wc -l
```
Count IP addresses:
```bash
sudo last -f /var/log/btmp.1 | head -n -2 | awk '{print $3}' | sort | uniq | wc -l
```
The SSH login attempts dropped like a stone after I implemented the defense mechanism. :-D
It went dead silent for a while. However, I then saw a suspicious pattern: by right, I only allow 12 attempts every two minutes per IP address, but I can see more attempts than that. I suspect the attacker made a bunch of connections first, then proceeded with the attempts.
I need to come up with even more aggressive heuristics.
FS | Space | %Use |
---|---|---|
/ | 25 GB | 45% |
/var | 4 GB | 66% |
/var/log | 4 GB | 62% |
/var/tmp | 2 GB | 1% |
/tmp | 8 GB | 1% |
unalloc | 9 GB | – |
swap | 8 GB | – |
/mnt/work | 266 GB | 50% |
/mnt/data | 592 GB | 56% |
/mnt/speedy * | 118 GB | 38% |
(No change.)
* speedy is an SSD.
FS | Space | %Use |
---|---|---|
/ | 25 GB | 44% |
/var | 4 GB | 64% |
/var/log | 6 GB | 3% |
/var/tmp | 2 GB | 1% |
/tmp * | 10 GB | 1% |
unalloc | 12 GB | – |
swap | 16 GB | – |
/mnt/work | 268 GB | 55% |
/mnt/data | 586 GB | 91% |
/mnt/archive | 1.8 TB | 64% |
/mnt/speedy | 235 GB | 11% |
* /tmp is mounted as tmpfs.
I finally resized /var/log from 2 GB to 6 GB.
swap is reduced from 32 GB to 16 GB, since it is hardly ever used.
"This book will help you become a better programmer", so boldly claimed the authors in the preface. This book is about the pragmatic stuff, hence its title. No fancy architecture or buzzword-of-the-day.
I don't agree 100% with the authors, nor do I think it is necessary to do everything they say, but one can already be very effective using 60% of their advice.
If there is one book I think all programmers should read, this would be it. I like this book so much that I have bought it four times over the years. I lost it twice (lent and not returned) and gave one away.
This book will help you become a better programmer. Yes, seriously. Go read it today!
The current standard model of cosmology says that the total mass-energy of the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.
Dark matter and dark energy provide very simple and clean explanations for the phenomena we observe, but I think they are wrong.
The evidence for dark matter is that the outer rims of galaxies rotate faster than the visible mass can account for; hence it must exist. But if I have to add five times the known matter to make my model work, I think my model is wrong. I simply think there is another explanation for this, one that we do not understand yet.
The evidence for dark energy is even shakier. Galaxies are moving apart faster than expected, and dark energy is what provides the force. Again, we need a very large number to make it work. Here, I simply assume the way we measure distances (billions of years back in time) is wrong. :lol:
This is one book by Scott Meyers that I won't be buying.
I have his other books:
These are required reading. Otherwise, you are not even aware of C++'s many traps and pitfalls.
I renounced C++ in 2006 or so. The C++ I know is C++98 and, later, C++03. C++11 is almost a new language; I cannot read it. It is now C++14.
I used to like C++; it was the only language I would use. But after several years, I realized I was spending more of my time "fighting" the language than solving my problems! It gave the illusion of power, but it was actually very limiting.*
I decided to go back to the basics: C. In C, you only "pay for what you use". This was the motto of early C++, but it was never true.
For dynamic stuff, I use JavaScript — and later PHP for server-side processing. It is liberating to use dynamic languages. Strings and arrays are first-class objects. Associative arrays and regular expressions are available and are very useful. Loose typing works — learn to let go :-P. There is no need to worry about memory management. You just concentrate on solving your problem.
*At the intermediate level. Template metaprogramming is very powerful, but oh boy, the syntax and the error messages. And you will waste a lot of time fiddling with it instead of solving your problem.
These are some projects I have in mind. Some have been on the backburner for years. :-(
Miniature lighting. I've always wanted to light up my (yet to be constructed :-P) Lego city. But I don't want just a simple on/off switch; I want to control individual lights: street lights, house lights, etc. How to do that with minimal wiring? That is phase 1. Phase 2 is lighting up the vehicles. O_O
Miniature painting. I intend to paint some of my board game tokens to personalize them. This is a special case of the next project.
Pimp boardgames. There are third-party tokens, but they are very expensive. I'm now inclined to 3D-print the parts and paint them myself. (I like to do things the hard way.)
Update server. Change to 64-bit Ubuntu, get USB 3.0 ports working.
Backup file checksum. Use checksum to make sure files are copied correctly. I have encountered very rare cases where files were corrupted silently. Currently I'm running sha1sum manually.
Alerts. Alert me when things happen, e.g. when the Toto jackpot exceeds S$4 million. :lol:
Comic on-demand. To view my comic collection over the network without having to unzip the files manually.
Real-time info: bus, carpark.
Concurrent video encoder. Upgrade to HandBrake 1.0. Create Web frontend. Standardize encoding settings. Re-encode videos.
IP hammer. It does not auto-repopulate the block list after a power cycle. To do.
Improve my programming toolkit. To enhance my library of code so that I can implement solutions faster.
A record 19 HDB resale flats were sold for over S$1 million in 2016. 11 were from Pinnacle@Duxton; the others were from City View @ Boon Keng and Natura Loft in Bishan.
(Note: Resale Flat Prices at data.gov.sg shows only 3 flats above S$1 mil.)
Is 19 shocking? There were 12 in 2015.
Personally, the figure that gives me more pause is that most resale 5-room flats cost above S$500k. Even worse, almost all flats, 3-room and above, cost more than S$300k. There is no cheap housing in Singapore.
Housing and private transport are the two biggest money drains in Singapore. These take years — perhaps even the entire working life — to pay off. Can't blame people for wanting stable jobs.
The resolutions are the same as 2016.
Don't squander time. Work on projects. Limit net surfing and YouTube time.
Exercise. This is more important than before, now that I have high blood pressure.
Keep track of tasks/schedules.
Housekeeping. Throw or give away unused stuff. Replace worn-out stuff. Optimize storage space.
Expenses visibility. Keep track of major expenses at least; I need to account for 80 to 90% of my expenses.
Incremental clothing replacement. Buy new t-shirts.
Home improvement project. I will give my IVAR shelves one last upgrade, replace the spoilt light bulb socket, and run fibre cable through the false ceiling.
Deadlines. I have an issue with non-work deadlines: I procrastinate and do things at the last minute. I have missed several critical deadlines as a result, including overstaying! :-O
Vouchers. I dislike vouchers; I tend to forget about them until they expire. In fact, some did expire, but luckily the shop still accepted them. Others I used on the very last day.
The last mile. It takes 20+ minutes to walk to the neighbourhood centre and back. A personal transporter would cut that time by more than half.
Dust. It is simply too dusty. I'm thinking of getting a robot sweeper to sweep the floor every day. In the meantime, I'm using the low-tech solution of closing the windows; it cuts the dust down by 80%!