I use a couple of scripts to help me encode whole directories at a time. Each directory contains one season of videos and can have its own set of options. There are pre/post processing options and steps. It can process just a subset of the files so that it can be parallelized.
It works pretty well, but it is not without its Achilles' heels.
Usually, I just need to set one dimension of the output resolution. However, if it is an odd-ball resolution or "non-standard" aspect ratio (AR), I need to set both dimensions, effectively hardcoding it.
I need to set explicit denoise parameters (if needed).
There are no episode-specific settings.
It does not handle multi-volume, multi-disc hierarchies.
It cannot encode selected chapters only.
I should be able to specify the aspect ratio and set just one dimension of the output resolution. This is complicated by "Scope" anamorphic DVDs, i.e. 2.35:1 films.
It should use generic denoise parameters. HB 0.10 has a new denoiser. I hope to switch to it transparently.
It should handle multi-volume and multi-disc shows via nested directories.
It should support encoding chapters into separate files in an automated fashion.
It should support a source-type parameter so that it can vary the CRF. I find that DVDs need the CRF adjusted by -2 relative to Blu-rays. A sketch of the idea is below.
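Something along these lines could do it (a minimal sketch; SRC_TYPE, BASE_CRF and the file variables are hypothetical names, not from my actual scripts):

```
#!/bin/bash
# Pick the CRF based on the source type (hypothetical variable names).
case "$SRC_TYPE" in
    bluray) CRF=$BASE_CRF ;;
    dvd)    CRF=$((BASE_CRF - 2)) ;;   # DVDs get the CRF adjusted by -2
    *)      echo "unknown source type: $SRC_TYPE" >&2; exit 1 ;;
esac

HandBrakeCLI -i "$INPUT" -o "$OUTPUT" -e x264 -q "$CRF"
```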
To check whether an x264 build supports 10-bit output, run x264 --help and look for:

Output bit depth: 10 (configured at compile time)

To check ffmpeg, run ffmpeg -h encoder=libx264 and look at the supported pixel formats:

8-bit output:
supported pixel formats: yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p nv12 nv16

10-bit output:
supported pixel formats: yuv420p10le yuv422p10le yuv444p10le nv20le
HandBrake not supporting 10-bit x264 is a bummer; my encoding workflow is centered on HandBrake. I will have to explore an ffmpeg-centric workflow, but I have two major concerns: I-frames at chapter stops and denoising.
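Both look addressable, at least on paper. Here is a sketch of what such an encode might look like (file names are placeholders, and the chapters syntax needs a reasonably recent ffmpeg):

```
# 10-bit x264 encode: yuv420p10le selects the 10-bit pipeline.
# -force_key_frames "chapters-0.1" forces a keyframe just before each chapter mark.
# Denoising could be added with a filter such as -vf hqdn3d.
ffmpeg -i input.mkv \
    -c:v libx264 -pix_fmt yuv420p10le -preset slow -crf 20 \
    -force_key_frames "chapters-0.1" \
    -c:a copy \
    output.mkv
```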
After using HandBrakeCLI 0.9.9 on RHEL 6.x for over two years, I finally decided to upgrade to 0.10.2. As usual, I followed the guide at CompileOnLinux.
I quickly got an error when I ran make, because /tmp was mounted noexec. No problem: remount it as exec.
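The remount is a one-liner (it reverts on reboot unless /etc/fstab is changed):

```
sudo mount -o remount,exec /tmp
```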
On re-running make, I got this:
ATTENTION! pax archive volume change required.
Ready for archive volume: 1
Input archive name or "." to quit pax.
Archive name >
It is sufficient to type '.' to continue. The reason is that the fdk-aac build tries different methods to untar, and it looks like the pax syntax has changed, at least on RHEL 6.6. I realized this when I went back and found that I could no longer compile HandBrakeCLI 0.9.9 either!
After this small hiccup, I ran into compilation errors. HandBrake now uses system libraries for LAME, Ogg, Theora, Vorbis and x264, among others, so they have to be compiled and installed first. The list is in make/include/main.defs, lines 43-53. The alternative is to move the list out of the if block.
After that, the build went well. It was able to encode videos (as expected).
Now that I was able to compile HandBrakeCLI, I decided to try out 10-bit encoding.
I compiled x264 with --bit-depth=10 and reinstalled the binary/libraries.
Then, I recompiled HandBrakeCLI.
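For reference, the x264 build step was along these lines (a sketch from memory; check ./configure --help for the exact flags):

```
# In the x264 source tree: build a 10-bit encoder as both binary and library.
./configure --bit-depth=10 --enable-shared
make
sudo make install
```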
Unfortunately, it did not work. When I tried to encode a video, it showed:
x264 [info]: profile High 10, level 1.3, 4:2:0 10-bit
...
x264 [error]: This build of x264 requires high depth input. Rebuild to support 8-bit input.
...
The output file could not be played.
To make sure I had enabled 10-bit x264 correctly, I compiled ffmpeg on the same system; it was able to do 10-bit encoding using libx264.
Finally, I posted a question on HandBrake's forum — really a last resort — and got the reply that HandBrake is 8-bit only. :duh: I should have asked first.
Currently, x265 encodes 15 - 20 times slower than x264. On my first test video, x264 encodes at 15 - 20 fps, but x265 encodes at a mere sub-1 fps! :-O
Assuming the quality is similar, the file size is 20% to 30% smaller.
On the other hand, I can't wait to try out the new denoise filter on my grainy videos. :lol:
Local stuff is expensive, no thanks to the triple threat of high rents, wages and utilities. There are three places to buy the same stuff for cheaper: Amazon, Taobao (淘宝网) and Malaysia.
Amazon is almost hassle-free, provided the item qualifies for free shipping. Taobao requires you to be able to read Chinese. For forays into Malaysia, you probably need a car, although it is possible to use public transport.
We are talking about savings of 20 - 40%. This can bring down the effective cost of living.
It is easy to over-insure. Plus, most insurance policies have a savings or investment portion. Agents like these policies because they earn more commission. The returns are only projections, and wildly optimistic ones at that.
If it were up to me, I would buy only term insurance, and only life and critical illness. Often, I think we can get by with just the company's medical coverage.
And of course, keep yourself healthy!
Get the feeling that your savings never seem to grow? The answer is to set explicit goals.
The first goal is $10k/year, which translates to $833/month. If this is too easy, then set a goal of $15k/year ($1,250/month) or $24k/year ($2,000/month). It may sound easy to save more as you earn more, but expenses have a way of catching up with your income.
Note that I use dollars instead of percentages, because the latter can be a little abstract. But you should try to save up to 30% of your net income — it gets increasingly difficult after that.
On one hand, it can be difficult to save $800 - $1,200 per month. On the other hand, $10k - $15k does not seem significant. But that is myopic: in three years, that $10k/year becomes $30k — if you keep at it. And that is finally enough for some big-ticket purchase or investment.
While it is tempting to save as much as possible, that should be for a specific goal and for a short period only, say six months. Here is why: it is possible to be so frugal, or so obsessed with the FIRE (Financial Independence, Retire Early) dream, that we forget to live in the present. Don't waste it; it will never come back.
Sometimes, we spend so much time looking for ways and means to generate "passive" income that we forget that we need to work! We actually shortchange ourselves in two ways: by not improving our work-related skills, and not working to further our career.
A recent blog article mentioned these four "truly passive" income streams that the average Singaporean should take advantage of:
Are they really feasible?
There are a few conditions: your flat must be desirable (location, cleanliness, quietness), you must have a spare room, and you must not mind the loss of privacy.
This can be a viable strategy. However, this should be considered before you buy a property. For example, you might want to buy a 4-bedroom flat near an MRT station.
It should be easy to rent out a common room for $500/month.
To a certain extent, low-risk dividend stocks can be treated as high-yield "bank interest". However, there is capital risk, which is often understated.
If you invest $30,000 at 5%, that is $125/month.
Credit cards can give up to 3% rebate, which means you "save" $24 for every $800. I quoted "save" because it is easy to overspend to try to get the rewards, when the simple alternative is not to spend at all.
I use $800 here because I think it is time to rethink your spending if it is exceeded.
If you put $30,000 in a bank that pays 2% annual interest, that is $50/month.
Not all streams are created equal. Some are fixed, while others scale. Rental and credit card rewards are more-or-less fixed. Dividend stocks and bank interest scale; the more you invest/save, the more the rewards.
Of these four streams, rental gives the best return out of the box and it takes a while — a long while — before dividend stocks and bank interest can match it.
If I were dictator-for-life for planet Earth, I would send robotic probes to these places right away:
Titan has a dense atmosphere and surface liquid! What are we waiting for?
Europa and Enceladus are frozen ice worlds. But beneath the surface is liquid water. What are we waiting for?
Venus is hell. We think it is due to a runaway greenhouse effect. All the more reason to learn about it, to make sure we do not go the same way!
Lastly, Triton, which is interesting because it is still geologically active. But it is so far away and so cold that it is lower in priority.
The New Horizons space probe, despite zipping through space at 14 km/s (you read that right), still took over nine and a half years to reach Pluto.
But it was there at last, on 14th July 2015.
Pluto, courtesy of New Horizons
This is history. It is also a testament to what humanity can do, if it chooses to focus on science.
Here's something I always keep in mind: "Dinosaurs ain't here anymore cos they didn't have a space program." Humanity must not make that mistake.
So, my "Atom server" is now running on life support on the EliteBook 2510p notebook. While it works, I kept thinking whether it was possible to use the Asus Eee PC 1215n netbook.
The reason it did not work before was that it booted off the internal HD, and I was unable to enter the BIOS due to the faulty keyboard.
So, let's remove the internal HD?
It was quite simple — there is a pictorial step-by-step tutorial on the Internet. Once done, it did boot off USB. :thumbsup:
And I found that I was able to use an external USB keyboard to enter the BIOS. I am very certain it did not work the last time.
So, the netbook is still usable if I replace the keyboard — for US$23. While I'm at it, maybe I should swap out its 2x1 GB RAM and put in 2x2 GB RAM. And swap the HD for an SSD? :-O
That was when I started to pause, "wait a minute..."
And reminded myself: this machine is history. I do not want to use it anymore. While it is great that it still runs, it is at the end of its road. No upgrades or repairs.
Although Intel's documentation says the Atom D525 can only address 4 GB of memory, it is apparently capable of 8 GB. As a result, the 1215n can use 2x4 GB DDR3 SO-DIMMs — at a slow-poke 800 MHz.
Add in a SSD, and it would have been one usable 64-bit Windows 7 machine.
I never knew it could address so much memory. When the 1215n first came out, the limit was 2.74 GB even on a 64-bit OS, so it was not cost-effective to upgrade to 4 GB RAM. Asus later updated the BIOS to allow the memory to be remapped — which people had thought impossible. But that is water under the bridge.
The 1215n netbook works perfectly as a headless file server, no upgrades or repairs needed. It runs the same "brain" off USB 2.0. It is workable, but the setup is fragile.
The most economical path forward is to clone the Ubuntu installation onto the internal 2.5" 250 GB HD and run it off that.
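If I ever do it, the clone itself is a one-liner (the device names are assumptions; verify with lsblk first, because dd will happily destroy the wrong disk):

```
# ASSUMPTION: the USB system disk is /dev/sdb and the internal HD is /dev/sda.
sudo dd if=/dev/sdb of=/dev/sda bs=4M conv=noerror
sync
# The clone keeps the same filesystem UUIDs, so boot with the USB disk unplugged.
```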
The 2510p has only 2 GB RAM, so it is very slow running Windows 7 as it swaps a lot.
To my surprise, I just found that it can take a 4 GB DDR2 SO-DIMM (it has only one slot).
But that is way too late now. No more upgrades for this slow-poke notebook from 2007 either. Not to mention DDR2 RAM is now even more expensive than DDR3 RAM.
All storage, from HD, optical disc and flash to tape, has error correction (EC). In fact, EC is required. If you knew just how unreliable our media are, you'd go back to pen and paper!
All transports, such as Ethernet, WiFi, USB and SATA, also have error detection and/or correction.
All except one — RAM.
RAM suffers from soft errors: a cosmic ray strikes and a bit is flipped. It has no effect if the RAM is unused, and it may not be obvious (data, disk cache), but the data is corrupted.
Hard numbers are hard to come by, especially for modern dense memory modules. Modern RAM cells are a smaller target (good), but denser (less charge, bad) and run at lower voltage (easier to flip, bad). They are also supposedly designed to be more resilient (good).
Some report 1 bit per 4 GB every 3 days; that seems kind of high. Others claim 1 bit per 1 GB every month; that seems reasonable. Some even claim 1 bit every few years; that seems pretty optimistic.
To me, the error rate should be a function of surface area, density and layout/orientation. Eight 1 GB modules would have 8x the error rate of one 8 GB module if they were spread out like a solar-cell array.
So, we know that the error rate is pretty low; thus desktop PCs and notebooks all use non-ECC RAM. But a computer that runs 24/7 will get hit eventually. Servers that run 24/7 use ECC RAM as standard.
The smallest RAM module with ECC is 4 GB, but 8 GB is much more common. This does not necessarily mean 4 GB of RAM does not need ECC, just that servers, where ECC is commonly used, need large amounts of RAM.
So, when do we need ECC?
IMO, we need ECC everywhere. Silent errors should not be tolerated. Currently, people blame software when computers crash. But is it always true?
Today, Intel enables ECC only on its low-end (Celeron, Pentium, i3) and high-end (Xeon) CPUs, and a C-series workstation motherboard is required. The cheapest ECC option with an Intel CPU:

Part | Item | Price |
---|---|---|
CPU | Celeron G1620 | US$46 |
M/B | Asrock E3C204 | US$145 |
RAM | 4 GB ECC | US$35 |

That comes to US$226. Not cheap, but not exactly unaffordable either.
The second thing is that we need more data: we should monitor the number of soft errors on computers with ECC RAM. A sketch of how is below.
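On Linux, the kernel's EDAC subsystem already exposes such counters, so a starting point might be (assuming the EDAC driver for your memory controller is loaded):

```
# Corrected (ce) and uncorrected (ue) error counts per memory controller.
grep . /sys/devices/system/edac/mc/mc*/ce_count
grep . /sys/devices/system/edac/mc/mc*/ue_count

# Or, with the edac-utils package installed:
edac-util --report=full
```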
The cheapest 2 GB RAM on Amazon is US$14 (+/-US$1). If you only want 2 GB, that is about the only choice.
There are a few combinations for 4 GB RAM:
Size | MHz | CL | Volts | Price |
---|---|---|---|---|
2 GB | 1600 | CL11 | 1.35V | US$14.74 |
4 GB | 1600 | CL11 | 1.5V | US$24.74 |
4 GB | 1600 | CL11 | 1.35V | US$24.99 |
4 GB | 1600 | CL9 | 1.35V | US$27.74 |
2x2 GB | 1600 | CL9 | 1.5V | US$30.99 |
Is it worth paying US$3 more for CL9? And then another US$3.25 for dual channel?
Dual channel, even at CL11 (2x the US$14.74 2 GB module, or US$29.48), should outperform single channel at CL9 (US$27.74). But is it worth the extra US$1.74?
8 GB RAM:
Size | MHz | CL | Volts | Price |
---|---|---|---|---|
8 GB | 1600 | CL11 | 1.5V | US$44.74 |
8 GB | 1600 | CL11 | 1.35V | US$45.74 |
8 GB | 1600 | CL9 | 1.35V | US$49.74 |
4x2 GB | 1600 | CL9 | 1.35V | US$51.74 |
The price difference between the slowest and fastest 8 GB configurations is US$7.
I would get 1.35V over 1.5V, as it is only US$1 more. If I want CL9, I would pay US$2 more for dual channel.
The price difference between the slowest 2 GB config and the fastest 8 GB config is US$37.
Based on several reviews, there is a significant difference in only three kinds of workload: memory benchmarks (10+%), IGP (10%) and file compression (5+%). The rest? 1-3%.
Power usage, taking power efficiency (estimated) into account:

PSU | Rated Eff | Cost | Eff @ 10W | At wall | Eff @ 20W | At wall |
---|---|---|---|---|---|---|
300W | 75% | $0 | 60% | 16.7W | 65% | 30.8W |
300W | 80% | $30 | 70% | 14.3W | 75% | 26.7W |
250W | 85% | $65 | 75% | 13.3W | 80% | 25W |
90W | 86% | $130 | 80% | 12.5W | 86% | 23.3W |
My old Atom D510 draws 20W with one HDD (my guess; I have never measured it). The new board should draw just 10W.
Cost per year for 24/7 operation with electricity at 22.41 cents/kWh:
PSU | 10W load | 20W load |
---|---|---|
300W (75%) | $32.79 | $60.46 |
300W (80%) | $28.07 | $52.42 |
250W (85%) | $26.11 | $49.08 |
90W (86%) | $24.54 | $45.74 |
Years to break even:

PSU | Savings @ 10W | Years @ 10W | Savings @ 20W | Years @ 20W |
---|---|---|---|---|
300W (75%) | $0 | – | $0 | – |
300W (80%) | $4.72 | 6.36 | $8.04 | 3.73 |
250W (85%) | $6.68 | 9.73 | $11.38 | 5.71 |
90W (86%) | $8.25 | 15.76 | $14.72 | 8.83 |
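For anyone checking the arithmetic, the formulas behind these tables are simple. A quick awk sketch:

```
# Annual cost (S$) = wall draw (kW) x 24 x 365 x tariff (S$/kWh).
# E.g. the stock 75%-efficient PSU drawing 16.7W at the wall:
awk 'BEGIN { print 16.7/1000 * 24 * 365 * 0.2241 }'   # 32.784, the $32.79 above

# Years to break even = extra PSU cost / annual savings.
awk 'BEGIN { print 30 / (32.79 - 28.07) }'            # 6.36 years for the $30 PSU
```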
The answer is clear. An energy efficient power supply costs too much to make sense.
Electricity costs will increase, so the break-even time will shorten. We are moving up from the low of 20.87 cents/kWh in Apr 2015. It hit 28.78 cents/kWh in Apr 2012. That is about 30% more expensive.
Cost per year with electricity at 26 cents/kWh:
PSU | Cost @ 10W | Savings | Years |
---|---|---|---|
300W (75%) | $38.04 | $0 | – |
300W (80%) | $32.57 | $5.47 | 5.48 |
250W (85%) | $30.29 | $7.75 | 8.39 |
90W (86%) | $28.47 | $9.57 | 13.58 |
Still takes a mighty long time to break-even.
After weighing the pros and cons, I decided to buy the N3150, mainly because it is newer and more future-proof:
The Asrock N3150-ITX has 4 SATA-III ports, 6 USB 3.0 ports and 6 USB 2.0 ports, and can use up to 16 GB RAM. Very interesting.
Now to see if I can find it!
My plan B is Asrock Q1900-ITX. It has 2 SATA-III ports and 4 USB 3.0 ports.
I need only 2 GB RAM, but I may get a pair for dual channel boost. I expect real-world performance difference to be 1% — I don't use the IGP.
But what kind of RAM?
The common options are 1066 MHz at CL7 (13.13 ns), 1333 MHz at CL9 (13.50 ns) and 1600 MHz at CL11 (13.75 ns). The slower speed grades actually have slightly better latency. Hmm...
The "best" yet affordable RAM I've managed to find is 1666 MHz at CL9 (10.80 ns). The timing for Corsair's Vengeance is 9-9-9-24, edging out Kingston's HyperX Impact at 9-9-9-27. However, these come in 4 GB and above.
Again, I think real-world performance will only be different by 1%.
And if possible, I want DDR3L (1.35V) instead of DDR3 (1.5V).
Who knew there are so many things to look out for in RAM?
I just found that ATX power supplies like mine are inefficient at low loads! :cry:
A typical power supply is 75% efficient at 20-80% load. An 80 Plus certified power supply is at least 80% efficient in that range.
Given that my power supply is 300W and the expected load is only 20-30W, that is just 10% load! Power efficiency could be just 60-70%!
Unfortunately, it is not a simple matter of using a more efficient power supply. First, it is almost impossible to find an ATX power supply under 400W now. Even a Titanium-rated unit — which costs a bomb — is only required to be 90% efficient at 10% load (40W).
Alternatively, I could use a notebook adapter (80-90+% efficient) with a picoPSU (96% efficient). But a picoPSU is not cheap.
I need to calculate how long it will take for a new efficient power supply to break-even. :lol:
Year | Tech | TDP | CPU | Speed | Cores/HT | L2 | Mem | Price |
---|---|---|---|---|---|---|---|---|
2010 Q1 | 45 nm | 13 W | Atom D510 | 1.66 GHz | 2/4 | 1 MB | 4 GB | $63 |
2013 Q4 | 22 nm | 10 W | Celeron J1800 | 2.41 - 2.58 GHz | 2/2 | 1 MB | 8 GB | $72 |
2013 Q4 | 22 nm | 10 W | Celeron J1900 | 2 - 2.42 GHz | 4/4 | 2 MB | 8 GB | $82 |
2013 Q4 | 22 nm | 10 W | Pentium J2900 | 2.41 - 2.66 GHz | 4/4 | 2 MB | 8 GB | $94 |
2015 Q1 | 14 nm | 6 W | Celeron N3050 | 1.6 - 2.16 GHz | 2/2 | 2 MB | 8 GB | $107 |
2015 Q1 | 14 nm | 6 W | Celeron N3150 | 1.6 - 2.08 GHz | 4/4 | 2 MB | 8 GB | $107 |
2015 Q1 | 14 nm | 6 W | Pentium N3700 | 1.6 - 2.4 GHz | 4/4 | 2 MB | 8 GB | $161 |
Given that the N3150 is a souped-up N3050 for the same price, I'm not sure why anyone would buy the N3050.
The Celeron N3050, part of the Braswell family, is quite new. For example, Asus just unveiled their mini-ITX m/b a few days ago!
Surprisingly, the N3050 is about 10% slower than the J1800 in CPU performance. In exchange, it cuts power by 40%.
Finally, all N3050 motherboards are fanless. The J1800 still requires a fan, but it should be inaudible when enclosed.
CPU | Mem | Channels | Type | Speed (MT/s) |
---|---|---|---|---|
Atom D510 | 4 GB | 1 | DDR2 | 667/800 |
Celeron J1x00 | 8 GB | 2 | DDR3L | 1333 |
Celeron N3x50 | 8 GB | 2 | DDR3L | 1600 |
I'm fine with either the J1800 or the N3150. I would prefer the N3150, but given the base price (US$107 vs US$72), its motherboards will be more expensive. Retailers should be trying to clear old J1800 stock, so I suspect it can be had for a very good price.
The J1800 is twice as fast as the D510, so it should be plenty fast! :lol:
2 GB RAM is sufficient, but 4 GB seems to be the smallest available. I may use 2x 4 GB RAM modules for dual channel operation, which gives up to 5% performance boost at a cost of 1-2W.
I want 2 SATA ports (SATA-II will do for mechanical disks), 2 USB 3.0 ports and 1 Gigabit Ethernet port.
Video can be either VGA or HDMI.
I'm not interested in the graphics processor or 3D performance at all.
Something that all 24/7 servers should have: ECC RAM. The commonly quoted error rate is 1 bit per 4 GB every 3 days. That seems pretty high!
Unfortunately, for Intel, only Celeron/Pentium G series and Xeon CPUs support ECC, and a C series workstation-class motherboard is needed.
Like a star that has run out of fuel, the Atom server was living on borrowed time. The stop-gap measure lasted just one day. It died for good yesterday.
It might take a couple of days — or a few weeks — to find its replacement, but I want it to limp along in the meantime.
My first choice was an unused Core PC from 2007. It turned out this PC was in an even worse state: it could not even boot up!
Next, I tried to clean the layer of dust off the Atom server's motherboard. It seemed to last a bit longer before it rebooted. So, this failed as well.
As a last-ditch attempt, I used the Core PC's power supply. Nope, the Atom server still rebooted spontaneously.
Then, I hit upon the bright idea of putting the HD in an external USB enclosure and booting it off a notebook! Really, the HD is the server. Who cares about the machine?
I did not have a spare external USB enclosure, so the only way was to "loan" one from my 1 TB Seagate Desktop Expansion HD. This drive is already filled to the brim with backup data, so it is very rarely accessed.
It was difficult to pry the enclosure open. I marvelled at the mechanical ingenuity that enabled it to be held tight without using any screws. This is an innovation alright.
The next thing was to find a notebook. I called upon my retired Asus 1215n netbook. It still worked, but it booted off the internal HD. I was unable to enter the BIOS Setup because the F2 key was spoilt, and using a USB keyboard did not work either. There is a lesson here.
Last choice: my glacially slow but still-in-use EliteBook 2510p. It worked!
It takes much longer to boot up, though. Previously, it took only 10+ seconds. Now, it takes well over a minute. Is USB 2.0 really that slow?
I was half-expecting it not to mount the partitions, because they are now on /dev/sdb. It is a good thing I used UUIDs to identify the partitions, so everything still works. :nod:
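For reference, a UUID-based /etc/fstab entry looks like this (the UUID below is a made-up placeholder; blkid lists the real ones):

```
# /etc/fstab: identify the partition by UUID instead of device name.
UUID=0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d  /  ext4  errors=remount-ro  0  1
```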
Bad news: there is no network connection. :-O
Luckily, the reason is simply that it treats this as a new network adapter and maps it to eth1. I just added the eth1 settings to /etc/network/interfaces.
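The addition is nothing more than an eth1 stanza (sketched here assuming DHCP; my actual settings may differ):

```
# /etc/network/interfaces: bring up the "new" adapter at boot.
auto eth1
iface eth1 inet dhcp
```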
Reboot and voila, the "Atom" is open for business!
Ever since I shifted my Atom server physically to another location, it has hung a couple of times.
A few days ago, it rebooted a number of times before it managed to boot successfully.
Yesterday, it finally failed. It would enter a reboot loop every 10-15 s. Still, that was enough time for me to get into the BIOS and see that the CPU was running at 87-89 °C.
That seemed rather high. It is only supposed to reach that temperature at 100% CPU load, not at idle.
The motherboard comes with a CPU fan, but I detached it a couple of years ago because it was too noisy. That seemed to work fine, anyway.
Until now. I reconnected the CPU fan and checked the temperature again: a steady 63 °C. What a difference!
It seems to be working fine now. :lol:
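To watch the temperature from the running system instead of the BIOS, lm-sensors should do the job (Ubuntu commands; availability on other distros is an assumption):

```
sudo apt-get install lm-sensors
sudo sensors-detect   # probe for sensor chips; answer the prompts
sensors               # print CPU temperatures, fan speeds, etc.
```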
This server has been running 24/7 for 5 years and is showing its age. It could be on its last legs: 63 °C still seems very high for a low-power Atom CPU.
I would love to run a 5th gen Core-M CPU. They are both fast (compared to Atom) and energy efficient.
I converted my Linux workstation to use multiple partitions last year — not long after I got it, iirc.
FS | Space | %Use |
---|---|---|
/ | 25 GB | 40% |
/var | 4 GB | 44% |
/var/log | 2 GB | 58% |
/var/tmp | 2 GB | 1% |
/tmp | 10 GB | 1% |
unalloc | – | – |
swap | 32 GB | – |
/mnt/work | 268 GB | 60% |
/mnt/data | 586 GB | 90% |
/tmp is mounted as tmpfs. It is fine here because the workstation has 32 GB RAM. There is no performance difference.
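For the record, a tmpfs /tmp is a single line in /etc/fstab (sized here to match the table above):

```
# /etc/fstab: keep /tmp in RAM, capped at 10 GB.
tmpfs  /tmp  tmpfs  defaults,size=10G  0  0
```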
I keep a large swap (same size as the RAM) because I wanted to hibernate the workstation, though I could not get hibernation to work reliably.
/mnt/data is for large and static files and /mnt/work is for smaller and frequently changed files.
Design intent for work:
The size is not picked randomly either:
A lot of thought goes into this. :lol:
Tweaked from my current workstation disk allocation.
FS | Space | %Use |
---|---|---|
/ | 25 GB | 41% |
/var | 4 GB | 66% |
/var/log | 4 GB | 22% |
/var/tmp | 2 GB | 1% |
/tmp | 8 GB | 1% |
unalloc | 9 GB | – |
swap | 8 GB | – |
/mnt/work | 266 GB | 20% |
/mnt/data | 592 GB | 89% |
The first five partitions, plus swap and the unallocated space, add up to 60 GB. I keep some free space around in case I need to expand some partitions in the future.
This time, I decided not to mount /tmp as tmpfs, because I could only set aside 6 GB for it — the server has only 16 GB RAM — and, more importantly, it made no performance difference at all.
It takes several steps to convert from one single partition to multiple partitions.
Mission accomplished!
The 80th anniversary edition
Especially when it costs just US$15.99 with free shipping. I wouldn't have bought it at the local retail price of S$49.90 ($39.92 after 20% off).
I very much prefer the plainness of the retro version. It is a throwback to a simpler era.
The game, however, is awful. :lol:
A game of Monopoly can be divided into four phases:
In the first phase, the acquisition, players just move around the board slowly and buy up properties. This phase can take a while, and nothing interesting happens. Sometimes, a player may get lucky and complete a set all by himself, after which he may start to build houses and get a head start over the others. But his lead may be short-lived, because others may be less willing to trade with him later.
Once players accumulate enough properties for a multi-way trade to form complete sets, it is time for the big trade! This may be one intensive negotiation session, or it may take place over several smaller sessions. This phase is over when 5-6 of the 9 sets (including railroads but excluding utilities) are completed.
Next, players "power up" their properties with houses. This will happen very quickly if players are flushed with cash. There is definitely a first-mover advantage here, given the limited houses.
There are three rules: one house is better than none, three houses is the sweet spot, and stop at four houses.
Once the houses are more-or-less gobbled up, there is nothing left to do but see who is unlucky enough to land on the "traps". This phase has very strong positive feedback: once a player needs to sell houses, or worse, mortgage his properties to pay the rent, it is basically game over for him.
You need to run Windows Update at least three times.
First, it will download a number of updates, and you need to restart your PC.
Then, it will download Windows 8.1 Update together with a few updates. You will need to restart your PC again.
Finally, it will download a huge bunch of updates that take "forever" (two hours) to install. Strangely, the CPU is only 30% loaded and there is no disk or network activity. You will need to restart your PC again.
Altogether, you need to download some 2.5 GiB worth of updates.
Installing Windows 8.1 and bringing it up-to-date is a huge time sink. I have done this several times since Nov 2014. If only it were easy to slipstream updates... like good old Windows XP.
One possible workaround is to install Windows 8.1 in a Virtual Machine, keep it unactivated and leave all settings untouched, and keep it up-to-date. When it is needed, just make a system image of it.
As a rule, I do not like one single partition.
I arrived at this new partition scheme after some trial and error:
Drive | Role | Home | Work |
---|---|---|---|
C: | App | 60 GiB | 80 GiB |
D: | Cache | 15 GiB | 20 GiB |
E: | Data | rest | rest |
The App drive contains the swap and hibernation files, so it is effectively around 10 GiB smaller.
The main motive is to reduce file system fragmentation. The App drive should be mildly fragmented, the Cache drive terribly, and the Data drive almost not.
Fragmentation is a non-issue on an SSD, but it still helps to have multiple partitions: it is easier to clear the cache or reinstall the OS, and the data is clearly segregated.
Our z620 Linux workstations are supremely fast: two Xeon E5-2670 v2 CPUs @ 2.50 GHz for a total of 40 logical cores :-O, with 32 GB RAM.
But they are let down by the spinning 7200 RPM HD. It is fast, but it pales beside an SSD.
Until now. Some of us will be getting a 256 GB SSD. Surprisingly, it won't help with compilation, which is CPU-bound, but it will help with disk-intensive operations — especially ones that involve random access to thousands of files.
I also found that my EliteBook 2560p notebook is only due for replacement in one year's time! IIRC, I got it in April 2012, so that means the replacement period is now 4 years!
I got an additional 4 GB RAM module and a 256 GB SSD. I would have liked an 8 GB module, but it costs 4x the price! 4 GB is borderline for everyday use, while 8 GB is sufficient.
(Or I can switch to Windows 8.1, which is more memory efficient. Or I can do both. ;-))
I got the SSD because my HD is dying — it makes the occasional terrifying clicking sound. One day, it will be the click of death.
Why SSD? Because every notebook should have an SSD! :-P The notebook slows to a crawl whenever it needs to access the disk — especially random access. Programs take tens of minutes to install, throttled by the disk. The 2.50 GHz i5-2520M CPU, still pretty decent today, is sitting idle.
My workhorse Windows PC, an xw8600 ex-Linux workstation, has been running well for over a year. However, I routinely hit its 4 GB RAM limit and it slows down once it starts to swap. I run many programs on it — it has three monitors and eight virtual desktops. It is almost never shut down or even rebooted, because it takes a while to get everything up and running again.
The three biggest memory hogs are Firefox (by far), Outlook and the ALM client. They inevitably leak memory over time, although the current versions are much better than their earlier incarnations.
So, I asked our local IT support if they have some spare DDR2 RAM modules. They do not. However, they have something better: z600 workstations!
So, I got one. :-D
 | xw8600 | z600 |
---|---|---|
CPU | X5450 | X5650 |
Speed | 3.0 GHz | 2.67 GHz |
Logical cores | 8 | 24 |
RAM | 4 GB | 12 GB |
Installation is as simple as moving the HD over.
The objective of this little exercise is to reduce the lag that lowers the productivity of our day-to-day work.
I'm going to estimate how much of my monthly spending qualifies for the 3% rebate. I'm guessing it is S$150 to S$300.
I'm going to stop once I charge S$300 to S$450 of non-qualifying items to the card.
By doing this, I should get a rebate of 0.75% to 1.5% — or 0.3% if I miss the S$600 minimum spend.
Trying to maximize rebate is good, but what I really need to do is to find out why my CC expenses are so crazily high. I used to be able to spend S$500 or less. :sweat:
I switched to using the OCBC 365 credit card in the belief that most of my spending qualifies for the 3% rebate. I was mistaken.
Month | Expenses | Rebate | %age | Charges |
---|---|---|---|---|
Aug 14 | $2,532.41 | $51.24 | 2.02% | |
Sep 14 | $1,071.56 | $15.69 | 1.46% | |
Oct 14 | $1,356.13 | $23.20 | 1.71% | |
Nov 14 | $3,205.21 | $80.00 | 2.50% | |
Dec 14 | $2,397.96 | $25.40 | 1.06% | $159.90 + $60 |
Jan 15 | $2,277.28 | $45.72 | 2.01% | -$60 |
Feb 15 | $1,454.72 | $15.82 | 1.09% | $86.42 + $60 |
Mar 15 | $582.26 | $1.75 | 0.30% | -$86.42 + -$60 |
Apr 15 | $1,275.18 | $13.06 | 1.02% |
What is worse is that I forgot to pay my credit card — twice! — and was slapped with the hefty late charge and interest.
OCBC will waive the late charge of S$60, but it will almost never waive the interest charge.
I was in Malaysia on both occasions (Dec 14 and Feb 15) and overlooked the due date. OCBC did not accept my reasons.
I finally got them to waive the interest charge by signing up for GIRO to pay my credit card bill, thereby ensuring I will never be late again.
The single interest charge of S$159.90 basically wipes out most of my cash rebates. And that makes me very sour about the card — and the bank.
Going forward, I need to consider two things. First, am I able to meet the minimum S$500 spending to earn the 0.5% bonus interest on my 360 Account? That translates to S$5 every S$1,000 per annum.
Second, am I able to hit S$600 to get 3% rebate — with sufficient qualifying items? I need a better strategy.
So far, my CC expenses are frightening! :-O
On one of my machines, the 50 GiB OS partition is always on the brink of being full. It has just 3 to 5 GiB free.
Finally, I needed to install Visual Studio 2013 and there was just not enough space.
I had no choice but to resize it. There is a giant 415 GiB data partition adjacent to it, of which just 28 GiB is used.
Windows does not provide a way to move a partition, so I used GParted Live.
I shrunk the data partition by 30 GiB and moved it "to the right" to make space. It took 5 hours.
I have to ask, why?
Why couldn't it stop once it had moved the used data? It should be smart enough to skip the free space.
If I had first shrunk the data partition to 30 GiB, moved it to the right, then expanded it back, it would have taken maybe just 15 minutes.
50 GiB is not sufficient for a Windows 7 "development" machine.
Resizing partitions is a slow operation, but it can sometimes be optimized — manually.
I noticed a couple of months ago that the network throughput of my 24/7 Atom server was limited to 1 MB/s.
It was strange. First, I attributed it to my 2.4 GHz WiFi. Later, when I switched to the 5 GHz WiFi, I attributed it to the mobile app or the router.
I finally knew something was wrong when high-motion scenes of a 720p video could not play smoothly from my server. That should not happen.
I checked the network tab of my notebook's Task Manager. Throughput was capped at 9.8 Mbps, despite being connected at 300 Mbps.
Suddenly, it struck me. The 9.8 Mbps rate is awfully close to 10BaseT. Surely I'm not running that slowly? :-O
I ran ethtool eth0 on my server and got
Speed: 10Mb/s
Oops. :sweat:
Restoring it to full speed:
ethtool -s eth0 speed 100 duplex full
I don't know if this will stick after power cycle. (Update: it does not.)
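One way to make it stick, assuming Ubuntu's ifupdown and that the ethtool package is installed, is a post-up hook:

```
# /etc/network/interfaces: re-apply the speed/duplex fix whenever eth0 comes up.
# (The DHCP stanza is an assumption; keep whatever addressing is already there.)
auto eth0
iface eth0 inet dhcp
    post-up ethtool -s eth0 speed 100 duplex full
```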
Now the network throughput reaches 20.x Mbps — still on the low side. The same high motion scenes now play better, but there is still occasional stuttering.
One mystery remains: when was the speed reduced and why? Was it due to Ubuntu 14.04, the router or cable?
Update: I used a new Cat 5e cable and it showed 100 Mb/s. I threw the old cable away.