
The proposed changes unfortunately do not, in my view, address the current issues with the lack of privacy in the .nz namespace. The following is my submission on the matter.
I have strong concerns with the proposed policy changes to .nz WHOIS information and am writing to request you reconsider your stance on publication of WHOIS information.
#1: Refuting requirement of public information for IT and business related contact
My background is working in IT and I manage around 600 domains for a large NZ organisation. This would imply that WHOIS data would be useful to me, as per your public good statement; however, I don’t find this to be the case.
My use cases tend to be one of the following:
1. A requirement to get a malicious (phishing, malware, etc) site taken down.
2. Contacting a domain owner to request a purchase of their domain.
3. A legal issue (eg copyright infringement, trademarks, defamation).
4. Determining if my employer actually owns the domain marketing is trying to use today. :-)
Of the above:
1. In this case I would generally contact the hosting service provider anyway, since the owners of such domains tend to be unreliable or unsure how to even fix the issue, whereas service providers tend to have a higher level of maturity around pulling such content quickly. The service provider can be determined via an IP-address lookup (see the example after this list), rather than relying on the technical contact information, which is often just the same as the registrant and doesn’t reflect the company actually hosting the site. Full registrant information is not required for this, although an email address is always good for a courtesy heads-up.
2. Email is satisfactory for this. Address and phone are not required.
3. Given any legal issue is handled by a solicitor, a legal request could be filed with the DNC to release the private ownership information in the event that the domain owner’s email address proved non-responsive.
4. An accurate owner name is more than enough.
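For the takedown case, the lookup itself needs nothing from WHOIS beyond what’s already public in DNS and IP registries. A minimal sketch (the hostname and IP here are made-up examples):

dig +short badsite.example.nz                          # resolve the domain to its hosting IP
whois 203.0.113.10 | grep -iE 'orgname|netname|abuse'  # identify the hosting provider and their abuse contact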
#2: Internet Abuse
I publish a non-interesting and non-controversial personal blog. I don’t belong to any minority ethnic groups. I was born in NZ. I’m well off. I’m male. The point being that I don’t generally attract the kind of abuse or harassment that is sadly delivered to some members of the online community.
However even I end up receiving abuse relating to my online presence on occasion, in the form of anonymous abusive emails. This doesn’t faze me personally, but if I was in one of the many online minorities that can (and still do) suffer real-world physical abuse, I might not be so blasé knowing that it doesn’t take much for someone to suddenly turn up at my home and throw abuse at me in person.
It’s also extremely easy for an online debate to result in a real world incident. It isn’t hard to trace a person’s social media comments to their blog/website and from there, their real world address. Nobody likes angry morons abusing them at 2am outside their house with a tire iron about their Twitter post.
#3. Cold-blooded targeting
I’ve discussed my needs as an IT professional for WHOIS data and the issue of internet abuse. Finally, I wish to point out the issue of exposing one’s address publicly when we consider what a smart, malicious player can do with the information.
* With a target’s date of birth (thanks Facebook!) and their address (thanks DNC policy!) you’re in a position to fake someone’s identity with a number of NZ organisations, including insurance and medical providers, which use these two (weak) forms of validation.
* Tweet a picture of your coffee at Mojo this morning? Excellent, your house is probably unoccupied for 8 hours, I need a new TV.
* Posting blogs about your amazing international trip? Should be a couple good weeks to take advantage of this – need a couch to go with that TV.
* Mentioned you have a young daughter? Time to wait for her at your address after school events and intercept her there. It’s not hard to be “Uncle Bob from the UK here to take you for candy” when you have addresses, names and habits thanks to the combined forces of real-world location and social media disclosure.
Not exposing information that doesn’t need to be public is a textbook infosec best practice to prevent social engineering attacks. We (try to be) cautious about what we tell outsiders because lots of small bits of information become very powerful very quickly. Yet we’re happy for people to slap their real-world home address on the internet for anyone to take advantage of, because no harm could come of this?
To sum up, I request the DNC please reconsider this proposed policy and:
1. Restrict the publication of physical addresses and phone numbers for all privately owned .nz domains. This information has little real use and offers avenues for very disturbing and intrusive abuse and targeting. At least email abuse can be deleted from the comfort of your couch.
2. Retain the requirement for a name and contact email address to be public. However, permit the publicly displayed name to be a pseudonym to preserve privacy for users who consider themselves at risk, with the owner’s real/legal name to be held by the DNC for legal contact situations.
I have no concerns if the DNC were to keep business-owned domain information public. Ltd companies’ director contact details are already publicly available via the companies register, and most business-owned domains simply list their place of business and their reception phone number, which doesn’t expose any particular person. My concern is the lack of privacy for individual New Zealanders rather than businesses.
Thank you for reading. I am happy for this submission to be public.
Apple MacOS’s Time Machine feature is a great backup solution for general desktop use, but has some annoying limitations, such as only working with either locally attached storage devices or with Apple’s Time Capsule appliances.
Whilst the Time Capsules aren’t bad devices, they offer a whole bunch of stuff I already have and don’t need – WiFi access point, ethernet router, and network attached storage and they’re not exactly cheap either. They also don’t help anyone wanting to backup to an off-site cloud server/VPS via a VPN.
So instead of a Time Capsule, I’m using a project called netatalk to allow a GNU/Linux server to provide an AFP file share to MacOS which acts as a suitable Time Machine target.
There’s an annoyance with Time Machine where it only officially works with AFP shares specially flagged as “Time Machine” shares. So whilst Apple has embraced SMB2 as the file sharing protocol of the future, you can’t use SMB2 for Time Machine backups (well, technically you can by enabling unsupported volumes in MacOS, but then you lose the ability to restore from backup via the MacOS recovery tools).
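If you do want to experiment with that unsupported route, the commonly cited tweak is a single defaults flag on the Mac – shown here only as an aside, since as noted above you give up recovery-tool support:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1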
To make life easy, I’ve written a Puppet module that installs netatalk and configures a Debian GNU/Linux server to act as a Time Capsule for all local users.
After installing the Puppet module (r10k or puppet module tools), you can simply define the directory and how much space to report to each client:
class { 'timemachine':
  backup_dir   => '/mnt/backup/timemachine',  # parameter name assumed - it was lost in the original post, check the module's docs
  volsizelimit => '1000000',                  # 1TB per user backing up
}
To set up each MacOS machine, you will need to first connect to the share using Finder. You can do this with Finder -> Go -> Connect to Server and then entering afp://SERVERNAME and authenticating with your PAM credentials for the server.
After connecting, the share should now appear under Time Machine preferences. If you experience any issues connecting, check the /var/log/afpd.log file for debug information on the server – common issues include not having created the directories for the shares or having incorrect permissions on them.
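For example, a quick server-side sanity check looks something like this (the backup path matches the earlier Puppet example and is an assumption about your setup):

ls -ld /mnt/backup/timemachine    # the share directory must exist and be writable by the backing-up user
tail -f /var/log/afpd.log         # watch for permission or volume errors whilst the Mac connects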
I recently obtained an iPhone and needed to connect it to my VPN. However my existing VPN server was an OpenVPN installation, which works nicely on traditional desktop operating systems and Android, but the iOS client is a bit more questionable, having last been updated in September 2014 (pre-iOS 9).
I decided to look into what the “proper” VPN option would be for iOS in order to get something supported by the OS as smoothly as possible. Last time I looked, this space was full of wonderful horrors like PPTP (not actually encrypted!!) and L2TP/IPSec (configuration hell), so I had always avoided it like the plague.
However as of iOS 9+, Apple has implemented support for IKEv2 VPNs, which offers an interesting new option. What particularly made this option attractive for me is that I can support every device I have with the one VPN standard:
IKEv2 is built into iOS 9 and MacOS El Capitan.
IKEv2 is built into Windows 10.
Works on Android via a third-party client app (hopeful for native integration soonish?).
Naturally works on GNU/Linux.
Whilst I love OpenVPN, being able to use the stock OS features instead of a third party client is always nice, particularly on mobile where power management and background tasks behaviour can be interesting.
IKEv2 on mobile also has some other nice features, such as MOBIKE, which makes it very seamless when switching between different networks (like the cellular-to-WiFi dance we do constantly with phones/tablets). This is something that OpenVPN can’t do – whilst it’s generally fast and reliable at establishing a connection, a change in network means issuing a reconnect; it doesn’t just move the current connection across.
Given that I run GNU/Linux servers, I went for one of the popular IPSec solutions available on most distributions – StrongSwan.
Unfortunately whilst its technical capabilities are excellent, its documentation isn’t great. The best way to describe it is that every option is documented, but which options you need and why you’d want to use them? Not so much. The “left” vs “right” style of documentation is also a right pain to work with; it’s not a configuration format that reads nicely and clearly.
Trying to find clear instructions and working examples of configurations for doing IKEv2 with iOS devices was also difficult, and there are some real traps for young players, such as the tools generating SHA1 certs instead of SHA2 when used with their defaults.
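As a rough illustration of avoiding the SHA1 trap when generating the CA by hand with strongSwan’s pki tool – treat this as a sketch, since flag behaviour can vary between strongSwan versions and the DN is just an example:

ipsec pki --gen --type rsa --size 4096 > ca.key.der
ipsec pki --self --ca --lifetime 3650 --in ca.key.der --type rsa \
    --dn "C=NZ, O=Example, CN=Example VPN CA" --digest sha256 > ca.crt.der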
The other fun part is that I also wanted my iOS device set up properly to:
Use certificate based authentication, rather than PSK.
Only connect to the VPN when outside of my house.
Remain connected to the VPN even when moving between networks, etc.
I found the best way to make it work was to use Apple Configurator to generate a .mobileconfig file for my iOS devices that includes all my VPN settings and certificates in an easy-to-import package, but also (critically) allows me to define options that are not selectable by end users, such as on-demand VPN establishment.
After a few nights of messing around and cursing the fact that all the major OS vendors haven’t just implemented OpenVPN, I managed to get a working connection. To save others the same pain, I considered writing a guide – but it’s actually a really complex setup, so instead I decided to write a Puppet module which does the following heavy lifting for you:
Installs StrongSwan (on a Debian/derived GNU/Linux system).
Configures StrongSwan for IKEv2 roadwarrior style VPNs.
Generates all the CA, cert and key files for the VPN server.
Generates each client’s certs for you.
Generates a .mobileconfig file for iOS devices so you can have a single import of all the configuration, certs and ondemand rules and don’t have to have a Mac to use Apple Configurator.
This means you can save yourself all the heavy lifting and setup a VPN with as little as the following Puppet code:
class { 'roadwarrior':
  manage_firewall_v4 => true,
  manage_firewall_v6 => true,
  vpn_range_v4       => '10.10.10.0/24',
  vpn_route_v4       => '192.168.0.0/16',
}

roadwarrior::client { 'myiphone':
  ondemand_connect       => true,
  ondemand_ssid_excludes => ['wifihouse'],
}

roadwarrior::client { 'android': }
The above example sets up a routed VPN using 10.10.10.0/24 as the VPN client range and routes the 192.168.0.0/16 network behind the VPN server back through. (Note that I haven’t added masquerading options yet, so your gateway has to know to route the vpn_range back to the VPN server).
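For illustration, the routing fix is a one-liner on the gateway, or alternatively a masquerade rule on the VPN server as an interim hack – the LAN address and interface name below are assumptions matching the example network above:

ip route add 10.10.10.0/24 via 192.168.0.10                             # on the LAN gateway; 192.168.0.10 = VPN server's LAN address
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE   # or NAT on the VPN server itself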
It then defines two clients – “myiphone” and “android”. And in the .mobileconfig file generated for the “myiphone” client, it will specifically generate rules that cause the VPN to maintain a constant connection, except when connected to a WiFi network called “wifihouse”.
The certs and .mobileconfig files are helpfully placed in /etc/ipsec.d/dist/ for your rsync’ing pleasure, including a few different formats to help load onto fussy devices.
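For example, pulling everything down in one go might look like the following (hostname and destination are placeholders):

rsync -av vpnserver.example.com:/etc/ipsec.d/dist/ ~/vpn-client-configs/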
Hopefully this module is useful to some of you. If you’re new to Puppet but want to take advantage of it, you could always check out my
If you’re not sure of my Puppet modules or prefer other config management systems (or *gasp* none at all!) the Puppet module should be fairly readable and easy enough to translate into your own commands to run.
There are a few things I still want to do – I haven’t yet done IPv6 configuration (which I’ll fix since I run a dual-stack network everywhere) and I intend to add a masquerade firewall feature for those struggling with routing properly between their VPN and LAN.
I’ve been using this configuration for a few weeks on a couple iOS 9.3.1 devices and it’s been working beautifully, especially with the ondemand configuration which I haven’t been able to do on any other devices (like Android or MacOS) yet unfortunately. The power consumption overhead seems minimal, but of course your mileage may vary.
It would be good to test with Windows 10 and as many other devices as possible. I don’t intend for this module to support non-roadwarrior type configs (eg site-to-site linking) to keep things simple, but I’m happy to merge any PRs that make it easier to connect more mobile devices or branch routers back to a main VPN host. I’m also happy to merge PRs for more GNU/Linux distribution support – currently only Debian/Ubuntu are supported, but it shouldn’t be hard to add others.
If you’re on Android, this VPN will work for you, but you may find the OpenVPN client better and more flexible since the Android client doesn’t have the same level of on demand functionality that iOS has built in. You may also find OpenVPN a better option if you’re regularly using restrictive networks that only allow “HTTPS” out, since it can be run on TCP/443, whereas StrongSwan IKEv2 runs on UDP port 500 or 4500.
Posting this here since I filed a disclosure with Ubiquiti on Feb 28th 2016 and have had no acknowledgment other than being told to be patient. But two months of not even looking at what is quite a serious issue isn’t acceptable to me.
I do really like the Unifi Video product (hardware + software) so it’s a shame it’s let down by poor transport security and slow addressing of security issues by the vendor. I intend to write up a proper review soon, but it was more important to get this report out first.
My mitigation recommendation is that you only communicate with your Unifi Video systems via secure encrypted VPN, eg IKEv2 or OpenVPN until such time that Ubiquiti takes this seriously and patch their shit.
28th Feb 2016 – Disclosure of issue via HackerOne (#119121).
There is an SSL/TLS certificate validation flaw in the Unifi Video application for Android and iOS, where it accepts any self-signed certificate served by the Unifi Video server, silently allowing a malicious third party to intercept data.
Unifi Video 3.1.2 (server)
Android app 1.1.3 (Build 153)
iOS app 1.1.7 (Build 1.1.48)
Any man-in-the-middle attacker could intercept customers using Unifi Video from mobile devices by replacing the secure connection with their own self-signed certificate, capturing login password, all video content and being able to use this in future to view any cameras at their leisure.
Steps to reproduce:
Perform clean installation of Unifi Video server.
Connect to the web interface via browser. Self-signed cert, so have to accept cert.
Connect to NVR via the Android app. No cert acceptance needed.
Connect to NVR via the iOS app. No cert acceptance needed.
Erase the previously generated keystore on the server with: echo -n "" > /usr/lib/unifi-video/data/keystore
Restart server with: /etc/init.d/unifi-video restart
We now have the server running with a new cert. You can validate that by refreshing the browser session – it will require re-acceptance of the new self-signed certificate, and you can see the new generation time and fingerprint of the new cert (see the fingerprint check example after these steps).
Launch the Android app. Reconnect to the previously connected NVR. No warning/validation/acceptance of the new self-signed cert is requested.
Launch the iOS app. Reconnect to the previously connected NVR. No warning/validation/acceptance of the new self-signed cert is requested.
Go get some gin and cry :-(
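One way to confirm the cert really has changed is to check its fingerprint directly from another machine – something along these lines, assuming the NVR’s HTTPS interface is on port 7443 (adjust the hostname and port for your install):

openssl s_client -connect nvr.example.com:7443 </dev/null 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256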
Whilst I can understand an engineer may have decided to develop the mobile apps to always accept a cert the first time they see it, to simplify setup for customers who will predominantly have a self-signed cert on the Unifi Video server, they must not accept subsequent certificate changes without warning the user. Failing to do so allows a MITM attack on any insecure network.
I’d recommend a revised workflow such as:
User connects to a new NVR for the first time. Certificate is accepted silently (or better, shows the fingerprint, aka SSH style).
Mobile app stores the cert fingerprint against the NVR it connected to.
Cert gets changed – whether intentionally by user, or unintentionally by attacker.
Mobile apps warn that the NVR’s cert fingerprint has changed and that this could be dangerous/malicious. User has option of selecting whether they trust this new certificate or whether they do not wish to connect. This is the approach that web browsers take with changed self-signed certificates.
This would prevent silent MITM attacks, whilst still allowing a cert to be updated/changed intentionally.
Communication with Ubiquiti:
12th March 2016 Jethro Carr
hi Ubiquiti,
Can I please get an update – do you confirm there is an issue and have a timeframe for resolution?
15th March 2016 Ubiquiti Response
Thank you for submitting this issue to us, and we apologize for the delay. Since launching with HackerOne we have seen many issues submitted, and we are currently working on reducing our backlog. We appreciate your patience and we’ll be sure to update you as soon as we have more information.
Thanks and good luck in your future bug hunting.
24th April 2016 Jethro Carr
hi Ubiquiti,
I’ll be disclosing publicly on 29th of April due to no action on this report after two months.
26th April 2016 Ubiquiti Response
Thank you for submitting this issue to us, and we apologize for the delay.
We’re still reviewing this issue and we appreciate your patience. We’ll be sure to update you as soon as we have more information.
Thanks and good luck in your future bug hunting.
The first generation Mac Mini (Macmini1,1) has a special place as the best bang-for-buck system that I’ve ever purchased.
Purchased for around $1k NZD in 2006, it did a stint as a much more sleep-friendly server back when I started my first job and was living at my parents’ house. It then went on to become my primary desktop for a couple of years in conjunction with my laptop. And finally it transitioned into a media centre and spent a number of years driving the TV and handling long running downloads. It even survived getting sent over to Sydney and running non-stop in the hot blazing hell inside my apartment there.
My long term relationship on the left and a more recent stray I obtained. Clearly mine takes after its owner and hasn’t seen the sun much.
Having now reached its 10th birthday, it’s started to show its age. Whilst it handles 720p content without an issue, it’s now hit and miss whether 1080p H264 content will play without unacceptable jitter.
It’s previously undergone a few upgrades. I bumped it from the original 512MB RAM to 2GB (the max) years ago, and it’s had its 60GB hard drive replaced with a more modern 500GB model. But neither of these will help much with video decoding performance.
Given we had recently obtained something that the people at Samsung consider a “Smart” TV, I decided to replace the Mac Mini with the Plex client running natively on the TV and recycle the Mac Mini into a new role as a small server, potentially replacing a much more power-hungry system that performs somewhat basic storage and network tasks.
Unfortunately this isn’t as simple as it sounds. The first gen Intel Mac Minis arrived on the scene just a bit too soon for 64-bit CPUs and so are packing the original Intel Core Solo or Intel Core Duo (1 or 2 cores respectively) which aren’t clocked particularly high and are only 32bit capable.
Whilst GNU/Linux *can* run on this, supported versions of MacOS X certainly can’t. The last MacOS version supported on these devices is the 32-bit Mac OS X 10.6.8 “Snow Leopard”, and the majority of app developers for MacOS have decided to set their minimum supported platform at 64-bit MacOS X 10.7.5 “Lion” so they can drop the old 32-bit stuff – this includes the popular Chrome browser, which now only provides 64-bit builds. Basically OS X Snow Leopard is the Win XP of the MacOS world.
Even running 32-bit GNU/Linux can be an exercise in frustration. Some distributions now only ship 64-bit builds and proprietary software vendors don’t always bother releasing 32-bit builds of their apps limiting what you can run on them.
On the plus side, this earlier generation of Apple machines was before Apple decided to start soldering everything together which means not only can you replace the RAM, storage, drives, WiFi card, you can also replace the CPU itself since it’s socketed!
I found a guide explaining the upgrade process. Essentially you can replace the CPUs in the Macmini1,1 (2006) or Macmini2,1 (2007) models with any chip compatible with the same socket, the highest spec model available being the Intel Core 2 Duo 2.33 GHz T7600.
At ~$60 NZD for the T7600, it was a bit more than I wanted to spend on a decade-old CPU. However, moving down slightly to the T7400, the second hand price drops to around ~$20 NZD per CPU with international shipping included. And at 2.16 GHz it’s no slouch, especially when compared to the original 1.5 GHz single-core CPU.
It took a while to get here, after the first seller never delivered the item and refunded me when asked about it. One of my CPUs also arrived with a bent pin, so there were some rather cold-sweat moments straightening the tiny pin with a screwdriver. But I guess this is what you get for buying decade-old CPUs from a mysterious internet trader.
I was surprised at the lack of dust inside the unit given its long life; even the fan duct was remarkably dust-free.
The replacement is a bit of a pain – you have to strip the Mac Mini right down and take the motherboard out, but it’s not the hardest upgrade I’ve ever had to do – dealing with cheap $100 cut-your-hand-open PC cases was much nastier than the well designed internals of the Mac. The only real tricky bit is the addition and removal of the heatsink, which worked best with a second person helping remove the plastic pegs.
I did it using a regular putty knife, needle-nose pliers, Phillips and flat-head screwdrivers, and one Torx screwdriver to deal with a single T10 screw that differs from the rest of the ones in the unit.
I recommend testing things *before* putting the main case back together – they’re a pain to open back up if it doesn’t work on the first run.
The end result is an upgrade from a 1.5 GHz single-core 32-bit CPU to a 2.16 GHz dual-core 64-bit CPU – whilst it won’t hold a candle to a modern i7, it will certainly be able to crunch video and server tasks quite happily.
The next problem was getting an OS on there.
This CPU upgrade opens up new options for MacOS fans, if you hack the installer a bit you can get MacOS X 10.7.5 “Lion” on there which gives you a 64-bit OS that can still run much of the current software that’s available. You can’t go past Lion however, since the support for the Intel GMA 950 GPU was dropped in later versions of MacOS.
Given I want them to run as servers, GNU/Linux is the only logical choice. The only issue was booting it… it seems they don’t support booting from USB flash drives.
These Mac Minis really did fall into a generational gap. Modern enough to have EFI and no legacy ports, yet old enough to be 32-bit and lack support for booting from USB. I wasn’t even sure if I would even be able to boot 64-bit Linux with a 32-bit EFI…
Given it doesn’t boot from USB and I didn’t have any FireWire devices lying around to try booting from, I fell back to the joys of optical media. This was harder than it sounds given I don’t have any media and barely any working drives, but my colleague thankfully dug up a couple of old CD-Rs for me.
“Daddy are those shiny things floppy disks?”
I also quickly remembered why we all moved on from optical media. My first burn appeared to succeed but crashed trying to load the bootloader. And then it refused to eject. Actually, it’s still refusing to eject, so there’s a Debian 8 installer that might just be stuck in there until its dying days… The other unit’s optical drive didn’t work at all, so I couldn’t even go through the pain of swapping hardware around to get a working combination.
Having exhausted the option of an old-school CD-based GNU/Linux install, I started digging into ways to boot from another partition on the machine’s hard drive and found rEFInd.
This awesome software is an alternative boot manager for EFI. It differs slightly from a boot loader: in a traditional BIOS -> Boot Loader -> OS world, rEFInd is equivalent to a custom BIOS offering better boot functionality than the OEM vendor’s.
It works by installing itself into a small FAT partition that lives on the hard disk – it’s probably the easiest low-level tool I’ve ever installed – download, unzip, and run the installer from either MacOS or Linux.
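From either OS it’s roughly the following, shown here as shell commands (the installer script is named install.sh in older rEFInd releases and refind-install in newer ones):

unzip refind-bin-*.zip
cd refind-bin-*/
sudo ./refind-install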
Disturbingly easy from the existing OS X installation
Once installed, rEFInd kicks in at boot and offers the ability to boot from USB flash drives, in addition to the hard drive itself!
The USB flash installer has been detected as “Legacy OS from whole disk volume”.
Yusss, Debian installer booted from USB via rEFInd!
A typical Debian installation followed; the only thing I was careful about was not deleting the 209.7MB FAT filesystem used by EFI – I figured I didn’t want to find out what deleting that would mean on a box that was hard enough to boot as it is…
The small <1MB free space between the partitions here irks me so much; I blame MacOS for aligning the partitions weirdly.
Once installed, rEFInd detected Linux as the OS installed on the hard drive and booted into GRUB, and from there the usual Linux boot process works fine.
Launch the penguins!
Final result – 2GB RAM, 64-bit CPU, delicious delicious GNU/Linux x86_64
I can confirm that both 32-bit and 64-bit Debian work nicely on this box (I installed 32-bit first by mistake) – so even without doing the CPU upgrade, if you want to get a bit more life out of these early unsupported Mac Minis, they’d happily run a 32-bit Debian desktop so you can enjoy wonders like a properly patched browser and operating system.
Not all other distributions will work – Ubuntu, for example, doesn’t include EFI support in its 32-bit installer, which will probably cause some headaches. You should be OK with any of the major 64-bit distributions since they tend to always support EFI.
The final joy I ran into is that when I set up the Mac Mini as a headless box, it didn’t boot… it just turned on and never appeared on the network.
Seems that the Mac Minis (even the later unibody generation) have some genius firmware that disables the GPU hardware if no screen is attached, which then messes up most operating systems on it.
The easy fix is to hack together a fake VGA load by connecting a 100Ω resistor between pins 2 and 7 of a DVI-to-VGA adaptor (such as the one that ships with the Mac Mini).
I need to make a tidier/better version of this, but it works!
No idea what engineer thought this was a good feature, but thankfully it’s an easy and cheap fix, especially since I have a box littered with these now-useless adaptors.
The end result is that I now have 2x 64-bit first gen Mac Minis running Debian GNU/Linux for a cost of around $20NZD and some time dismantling/reassembling them.
I’d recommend these small Mac Minis for server purposes, but the NZ second hand prices are still a bit too expensive for their age to buy specifically for this… Once they start going below $100 they’d make reasonable alternatives to something like the Intel NUC or Raspberry Pi for small serving tasks.
The older units aren’t necessarily problem free either. Whilst the build quality is excellent, after 10 years things don’t always work right. Both of my optical drives no longer function properly and one of the Mac Minis has a faulty RAM slot, limiting it to 1GB instead of the usual 2GB.
And of course at 10 years, who knows how much longer they’ll run for – but it’s been a good run so far, so here’s to another 10 years (hopefully)! The real limiting factor long term is going to be the 1GB/2GB RAM.
This week I presented a talk about the tooling we have set up at Fairfax for running microservices for Node.js apps.
Essentially we have a workflow that uses one set of tooling for CI/CD and another for deployment. Our apps follow a common set of principles, making each service simple and consistent to deploy.
This talk covers the reasons for this particular approach, the technologies used and offers a look at our stack including infrastructure and the deployment pipeline.
Whilst this talk is Node.js specific, we use the same technology for both Node.js and Java microservices and will shortly be standardising our Ruby applications on this approach as well.
Earlier this month I was invited to speak at the AWS Wellington User Group about how we’ve been handling cost control at Fairfax, including our use of spot pricing. I’ve now processed the video and put a recording online for anyone interested in watching.
The video isn’t great since we took it in dim light using a cellphone and a webcam in a red-lit bar, but the audio came through pretty well.
Lately a couple of people have asked me how much swap space is “right” for their servers – especially in the context of running low-spec machines like AWS t2.nano/t2.micro or VPS boxes with low allocations like 1GB or 512MB RAM.
The old fashioned advice was always “your swap space should be double your RAM” but this doesn’t actually make a lot of sense any more. Really swap should be considered a tool of last resort – a hack even – to squeeze a bit more performance out of systems and should be used sparingly where it makes sense.
I tend to look after two different types of systems:
Small systems running a specific dedicated service (eg microservices). These systems might do nothing more than run Nginx/Apache or something like PHP-FPM or Unicorn with a few workers. They typically have 512MB-1GB of RAM.
Big heavy servers running heavy weight applications, typically Java. These systems will be configured with large memory allocations (eg 16GB) and be configured to allocate a specific amount of memory to the application (eg 10GB Java Heap) and to keep the rest free for disk cache and background apps.
The latter doesn’t need swap. There’s no time I would ever want my massive apps getting pushed into swap for a couple reasons:
Performance of these systems is critical. We’ve paid good money to allocate them specific amounts of memory which is essentially guaranteed – we know how much the heap needs, how much disk cache we need and how much to allocate to the background apps.
If something does go wrong and starts consuming too much RAM, rather than having performance degrade as the server tries to swap, I want it to die – and die fast. If Puppet has decided it wants 7 GB of RAM, I want the OOM killer to step in and slaughter it. If I have swap, I risk everything on the server being slowed down as it moves tasks into the horribly slow (even on SSD) swap space.
If you’re paying for 16GB of RAM, why do you want to try and get an extra 512MB out of some swap space? It’s false economy.
For this reason, our big boxes are all swapless. But what about the former example, the small microservice type boxes, or your small personal VPS type systems?
Like many things in IT, “it depends”.
If you’re running stateless clusters, provided that the peak usage fits within the memory allocation, you don’t need swap. In this scenario, your workload is sized appropriately and if anything goes wrong due to an unexpected issue, the machine will either kill the errant process or die and get removed from the pool entirely.
I run a lot of web app workers this way – for example a 1GB t2.micro can happily run 4 Ruby Unicorn workers averaging around 128MB each, plus have space for Puppet, monitoring and delayed jobs. If something goes astray, the process gets killed and the usual automated recovery processes handle things.
However you may need some swap if you’re running stateful systems (pets) where it’s better for them to go slow than to die entirely, or if you’re running a system where the peak usage won’t fit within the memory allocation due to tight budget constraints.
For an example of tight budget constraints – I run this blog on a small machine with only 512MB RAM. With an allocation this small, there’s just not enough memory to run applications like Apache and also be able to handle the needs of background daemons and Puppet runs which can use several hundred MB just by themselves.
The approach I took was to create a small swap volume and size the worker counts in Apache so that the maximum number of workers at average size would just fit within the real memory allocation. Any background or system tasks, however, would have to fight over the swap space.
Looking at the memory stats on this machine, I’m consuming quite a bit of swap – but my disk I/O is basically nothing. That’s because most of what’s in swap isn’t needed regularly, and the active workload – i.e. the apps actually using/freeing RAM constantly – fits within the available amount of real memory.
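If you want to check the same thing on your own box, a couple of standard tools tell the story:

free -m       # how much swap is actually in use
vmstat 5 5    # the 'si'/'so' columns show pages being swapped in/out; near-zero means
              # the swapped-out data is just parked there, not being thrashed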
In this case, using swap allows me to get better value for money, than using the next size up machine – I’m paying just enough to run Apache and squeezing in the management tools and background jobs onto the otherwise underutilized SSD storage. This means I can spend $5 to run this blog, vs $10. Excellent!
With respect to sizing, I’m running a 1GB swap on a 512MB RAM server, which is compliant with the traditional “twice your RAM” approach. That being said, I wouldn’t extend past this; even if the system had more RAM (eg 2GB), you should only ever use swap as a hack to squeeze a bit more out of a system. Basically, don’t assume swap should scale linearly as memory scales.
Given I’m running on various cloud/VPS environments, I don’t have a traditional swap partition – instead I create an image file on the root filesystem and format it as swap space. I use a third party Puppet module to do this:
swap_file::files { 'default':
  ensure       => present,
  swapfile     => '/tmp/swapfile',  # parameter name assumed - it was lost in the original post, check the module's docs
  swapfilesize => '1000 MB',
}
The performance impact of using a swap file on top of a filesystem is almost nothing, and this dramatically simplifies management and allocation of swap space. Just make sure the swap file doesn’t end up on a tmpfs-backed volume (like a RAM-backed /tmp), or you’ll find that memory benefit isn’t as good as it seems.
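For anyone not using Puppet, the equivalent by hand is only a few commands – this sketch puts the file at /swapfile rather than under /tmp to avoid the tmpfs caveat above:

dd if=/dev/zero of=/swapfile bs=1M count=1000      # 1000 MB swap file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab    # persist across reboots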
After a recent reboot of my CentOS servers, I’ve inherited an issue where the server comes up running with /tmp mounted using tmpfs. tmpfs is a memory-based volatile filesystem and has some uses for people, but others like myself may have servers with very little free RAM and plenty of disk and prefer the traditional mounted FS volume.
In theory it should be possible to disable this by disabling the tmp.mount systemd unit… except that it already is disabled – the unit shows as disabled both on my server and by default by the OS vendor.
The fact that I can’t disable it appears to be a bug. The RPM changelog references an upstream ticket and implies it’s fixed, but the ticket seems to still be open, so more work may be required… it looks like any service defining “PrivateTmp=true” triggers it (such as ntp, httpd and others).
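You can see how widespread the trigger is by checking which unit files on the box set that option:

grep -rl 'PrivateTmp=true' /usr/lib/systemd/system/ /etc/systemd/system/ 2>/dev/null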
Whilst the developers figure out how to fix this properly, the only sure way I found to resolve the issue is to mask the tmp.mount unit with:
systemctl mask tmp.mount
Here’s something to chuck into your Puppet manifests that does the trick for you:
exec { 'fix_tmpfs_systemd':
  path    => ['/bin', '/usr/bin'],
  command => 'systemctl mask tmp.mount',
  unless  => 'ls -l /etc/systemd/system/tmp.mount 2>&1 | grep -q "/dev/null"',
}
This properly survives reboots and is supposed to survive systemd upgrades.
So 2016 marks 10 years since my very first linux.conf.au in 2006. That’s so long ago it even predates the start of this blog!
Sadly I’m not heading to LCA 2016 in Geelong this year. A number of reasons for this, but I think in general I just needed a break this year. I want LCA to be something I’m really excited about again rather than it feeling like a yearly habit so a year away is probably going to help.
This isn’t a criticism of the 2016 team, it looks like they’re putting together an excellent conference and it should be very enjoyable for everyone attending this year. It’s going to suck not seeing a bunch of friends whom I normally see, but hope to see everyone at 2017 in Hobart.