Houston Linux Users Group

Our meeting location has changed to 2550 North Loop West. (Map Link) (Street View Link)

Workshops are held every Wednesday from 6pm to 9pm in cPanel's offices at 2550 North Loop West. Experienced Linux users and administrators will be on hand to assist members with Linux installation, configuration, setup, networking, and basic training. In other words, people just come and hang out.

Happy hour is held every Wednesday night just afterward, at 9 p.m. See the happy hour page for details.

HLUG members can often be found on irc.freenode.net in the #hlug channel.

April 14, 2017

Pete Jamison


Software as a service, Platform as a service, Infrastructure as a service… might as well list Accounting as a service, HR as a service, Security as a service, Programming & Development as a service, Service as a service (actually that last isn’t a joke, as Customer Support has been offshored by many organizations now)… can every business process be outsourced or commoditized? I don’t know, but I haven’t seen much new – or old – in the service economy that compels me to conclude that the benefits outweigh the hazards of paying outside sources for everything. But there may be a category now that makes more sense than many: Disaster Recovery as a Service.

Cloud providers can give you a rented virtual (and often, additionally, a rented physical) infrastructure. You can rent online storage for backups. You can dispense with information technology staffing by hiring server admin companies on an as-needed basis (I don’t recommend that, but it’s a widespread practice). You can hire security experts as needed, hopefully before they’re needed. But what if there were a way to combine solutions for many of these needs into one strategy? As I understand it, Disaster Recovery as a Service uses a massive backup as a fix for not just hardware failure or power loss, but for many things this side of DDoS mitigation (or other problems outside your network). One particular company with a booth I saw at a recent security conference appeared to work like Carbonite backup, except for your whole network inventory. All nodes (desktops, laptops, workstations, servers) plus all network devices like routers and switches are “cloned” at the provider’s data center. This approach stores not just data backups but whole operating system builds, with users, desktop environments, configurations, applications and stray documents. And for the network devices, all the addresses, routings, rules, configs and system versions are included. These are (in this particular company’s instance) not just snapshots restricted to one point in time, but constantly updated “clones” of All The Things.

A “Carbonite for your whole shop” approach would, as described here, seem to rely on lots of bandwidth to do all the constant updating (and think about the cost of that, on top of the network capacity it demands). Also, securing the connection to the provider location or cloud introduces one issue; the defense stance of the provider data center itself constitutes another. After all, grabbing backups has already proven profitable to criminals worldwide, who didn’t have to frontally attack a victim’s main arrangements. But there’s also promise in the approach. When your hardware and connectivity are ready again after an event, you could come back from fire, flood, tornado, hurricane, earthquake, physical strikes like an eighteen-wheeler smashing through your office wall, node breakages induced by Windows Update, System Update or the like, worm damage, maliciously encrypted data blackmail and most problems related to geographical location (sinkholes?).

I can see potential challenges here with cost, security and network capacity. But if these can be handled then DRaaS is a new approach that could overtake legacy methods of the old offsite backup, reliable as it is. I’ll be watching the market for this product class with great interest.

by Pete (noreply@blogger.com) at April 14, 2017 06:15 AM

January 01, 2017

Pete Jamison

Weird Scenes Inside The Gold Mine

... or "Potential Hazards of the System Administration Business" and an appreciation of a classic Unix book (apologies to Jim Morrison and The Doors for ripping off their song title)

One of my prized possessions is my 1999 copy of Kirk Waingrow's "Unix Hints And Hacks". Sure, it's out of date, but as with many things in Unix/Linux, a lot hasn't changed. It inspires me due to its long-range perspective even today. This post is specifically inspired by Chapter 10: System Administration - The Occupation. That's the sort of real-world experience chapter most Unix books lack. The following is solely my own perspective and Waingrow is not responsible for any error!

All jobs are not created equal. Choose wisely as you accept Support Engineer, Technical Support, Developer, Support Technician or any such job. Any of them can go south due to personal situations, market conditions or company strategy, but I think I've identified a few types of issues that are common in this field. Watch out for...


Use a special email address for job offers. These days, the tendency of businesses to push much HR activity to the web, and your own possible tendency to browse listings rather than contact companies directly, will lead to an ornery amount of job-search spam. When the responses to listings start to share your address (many HR outfits have multiple sites and names, or have been purchased by other outfits) it can get out of control, so make sure the search doesn't gum up the works in your main or personal email account. Listing sites and headhunters haven't performed nearly as well for me as a grapevine network or looking at a company's job listings on its own site.


Use third party research or your grapevine to try to find out what's happening inside a company before you interview. There might have just been some turnover that could not only result in the loss of corporate knowledge but could have confused relationships between job functions and between departments. Like "Responsibility Creep" examined below, you may be seen as fresh meat and as the repository of work offloaded by others. This is possibly the hardest thing to discover about a potential employer. Glassdoor.com and other sites may be useful but that's not a sure thing, since both disgruntled ex-employees and current managers can post comments. Same with Reddit and so many other places. Try any local business journal newspapers/sites for stories about the company or industry.


(Note: the next predicament is couched in sneaky, dramatic possibilities but could also result from entirely appropriate reactions to business incidents.)

There exists the possibility of your joining a company at which a legal issue is already in progress. This is not common, but considering the state of computer security today, it could increase in frequency. Let's say John Juggler is on the board of a company that relies on internet presence to do business. The company's 24/7 server exposure and intentional lack of a full-time administrator (to save money on employee costs) have resulted in the loss of 90,000 complete customer records. We all know that this never happens, but (in my invented example here) it did, and since there could be legal repercussions, not only have they hired an admin but a Computer Information Security Officer was shopped for. You got the job.

There is indeed a legitimate need for such a person. There is also a need for Juggler's organization to have a fall guy in case anything requires it. In the board room, sanitized language is used by Juggler and others, as they're looking at the likelihood of rules in the coming year imposing a 90-day maximum on carrying out a mandatory data breach notification procedure. The cleanup could go longer than that. Juggler also makes a note to talk to the lawyer about how one might legally prevent a CISO from performing the federally mandated notification for as long as needed to circle the wagons. Good times!

A new CISO would benefit by performing a security audit as soon as he or she had the authority, noting time/date stamps and logging information on and after the hire date. That way, one could show the Fed (if they ask) when one became aware of the situation. And that new CISO should send out best-practices emails to all users immediately upon taking the reins of responsibility, and regularly after that. It is also incumbent upon the new CISO to assess how serious upper management truly is about enforcing new security procedures, which could create friction with those who would protest about complexity and ease of use.
Security improvements need to be implemented from the top down. They must come from the board room.


I've seen this happen with remarkable speed. A bunch of people I didn't know in a company with which I'd just started began calling me for everything. When given a task, they instinctively sprang into action by phoning the new guy to do it. Some duties were unrelated to my job description, but I didn't respond with the idiotic "That's not my job" retort. I countered by researching the organization chart, finding out who answered to whom, and getting bureaucratic. I indicated that I could get involved if their boss, Person X, talked to my supervisor, Person Y. See, it's my supervisor's (and possibly a client's) choice as to what I do daily. Not anyone else's. So properly one goes through channels in order to change that to-do list. Things began to cool off mysteriously after that point. ["It ain't what they call you; it's what you answer to." - W.C. Fields]

Another variant of this can happen if the company deliberately intends to economize on people: for years I've seen job announcements that almost went on forever, detailing activities it might take two or three people to do. An announcement for Tech Support? Check to see if it involves additional desktop support, on-call availability, website help, DBA work, network management and the ever-present "other duties". This predicament may even be forced onto unsuspecting managers and supervisors if someone above controls the funding for hires and begins allowing one hire for every two attritioning out. I saw that one happen for years; it wasn't middle management's doing, and I suspect it was completely contrary to middle management's well-being, wishes and needs.


I don't mean to discourage you (variants of the above situations could happen in many industries), but only to help warn against any rude awakening that might be in store. Talking with customers of the organization you're considering can also reveal helpful information not available anywhere else. Good luck!

by Pete (noreply@blogger.com) at January 01, 2017 12:51 AM

December 21, 2016

Pete Jamison

What Happens When You Ping In Solaris?

You may have been there: you try pinging in Solaris (in my VM running Oracle Solaris 11, specifically) and you find that it's necessary to add a flag, like this:

ping -s (hostname or IP) 56 5

The -s flag puts ping in statistics mode, sending a packet per second and reporting each reply. Per the man page, the trailing numbers are the data size (56 bytes is the default) and the packet count after which to stop; here, 5. If you don't give a count, it pings away until Control-C is invoked.

But what crazy thing happens to get you to that point?

If you try

ping (hostname or IP)

... what happens?

Here's something that hopefully wins you a beer somewhere. The response is this: "ping" does not fail, but returns a simple confirmation. It does not return individual packets by default; that's what the flag is for. It responds to...

ping (hostname or IP)

like so:

(hostname or IP) is alive

You can thank me later.

by Pete (noreply@blogger.com) at December 21, 2016 04:24 AM

December 19, 2016

Pete Jamison

Layoffs in IT? Not so fast.

I don't know what it is, but I know what it isn't.

For some time there have been news stories such as this one that state that the big layoffs the infotech world is now seeing are because of the "move to the cloud" (the customer adopting a rental rather than an ownership model of IT assets). Baloney. Such stories accurately convey what they know of the numbers of workers no longer employed, plus estimates of what's to come, and then proceed to prognosticate about the "why" of it all.

If they can, so can I.

I have some experience at a cloud provider which rented not only virtual but physical or co-located assets. Referring to that experience and totally off the cuff, I'll illustrate three categories of asset that give the customer any IT capability:

material assets
structural assets
maintenance assets

The material assets would be the physical hardware. The structural asset is whoever architects the nodes and networks together (which determines what to buy or rent). The maintenance assets would be those responsible for operation. The only one of these asset categories affected by "the cloud" (the question of renting or owning) is the first one, the material. We got lots of calls from customers who wanted help on the other two (either operating servers or re-architecting networks), but since we did not sell service, there was little we could do, although we did what little we could for them (restarts, minor investigation). This is my point: they still badly needed help with the second and third asset categories. Some customers tried to get by with no administrator, simply calling a server maintenance company when the box was down and judging a full- or part-time administrator to be too expensive. Sometimes we would be surprised to speak with an actual professional administrator employed by the customer. But you see the situation: whether renting or owning the gear, a business must still get someone to set up the way the computers interact with the world and each other, and must still get someone to perform day-to-day operations; servers don't run themselves any more than cars decide where to drive.

That's why I said "Baloney" at the beginning of this post. "The cloud" is simply an ownership distinction. The only way a cloud situation cutback could affect human administrators would be in a Managed Hosting environment, where you rent not only stuff but people (a wildly expensive prospect). Most servers are rented in the Unmanaged manner (bring your own expertise).

So if the layoffs can't be explained by the change to a rental model, what's up? My guess (I made good grades in Economics) would be that it's the general level of economic activity in a particular country. This seems to be going on worldwide. Although the Information Systems Security Association says that general labor needs between now and 2020 will grow at least 7%, and that IT Security needs will double or triple that in the same period, we still in the short run see cutbacks. I think both observations are true. We do (pretty much) know that for the two Obama presidential terms, GDP has grown less than 3%, which is poor. I say that IT health is directly related to general business activity, since IT is only a delivery device for business activity. If companies are merging, selling out or otherwise closing, there is less immediate need for us, the IT employees. But that changes as business climate improves or degrades.

I don't have any clever math to prove this. Only the logic that since "the cloud" does not give an organization the ability to fire the IT staff, you have to look elsewhere for the layoff impetus. Maybe employee cost goes up because of insurance price hikes. Maybe a competitor is trouncing the company's main product. Maybe the directors of the outfit intended to sell out at this time anyway. Is there a lawsuit from the owners of a similar design? Whatever.

If the general business environment improves, whoopee. We get more data so some Eco major can do a thesis. We'll see. But no, rented servers are not the 'mechanical man' supplanting real people.

by Pete (noreply@blogger.com) at December 19, 2016 09:27 PM

December 06, 2016

Pete Jamison

You Learn Something Every Day

...Every day you look, that is. Today I was doing another Udemy video course, and the video was showing the /sbin directory on a popular Linux distro. The usual suspects were there, but among the several versions of fsck - stalwarts like fsck.ext2, fsck.ext3, fsck.nfs and so on - I noticed one for Minix, of all things, which is indeed still around. And I noticed something even odder, which I wasn't sure was a mistake or a smear on my screen: fsck.cramfs.

CramFS?! Is there a filesystem by that name? Surely that's some Intercal-type April Fool joke.

It's no joke. And don't call me Shirley.

The Compressed ROM File System is, like the name suggests, designed to take up very little space. It's often used for embedded systems (the whole thing) but also as an assist for bigger systems. It's really little. A file has to be smaller than 16MB and an entire file system is restricted to about 270MB. Although it's rather old and other schemes have taken over its uses in many Linux distributions, Debian continues to use it for initrd images and I understand that one or another version of SUSE uses it for install images. To maximize smallness (or perhaps stability) it's read-only, although it ships helpfully with some utilities like mkcramfs; that and others are contained in the Util-linux package.

Because cramfs is arguably obsolete, many systems now use the also-weirdly-named squashfs. The tininess theme continues. Ubuntu, Mint, Debian, Arch and many others currently use it for LiveCDs. Utilities for squashfs are available as well, and mksquashfs and unsquashfs had been ported up to Windows 8.1 at last notice.

And on top of everything else, both cramfs and squashfs, while they exist in a compressed form, do NOT, repeat, do NOT require uncompression to operate. Wow.

So if some admin tries to blindside you with insane Linux terms like these - which in an alternate universe would probably not exist - thank me later. I would have lost a bar bet after a meeting over tools named this way.

by Pete (noreply@blogger.com) at December 06, 2016 03:36 AM

October 19, 2016

Pete Jamison

But first...

Before returning you to our regularly scheduled programming (Linux issues and security complexities), I'd like to do yet another list, aimed at what professionals and security advocates seem to need to do on a regular basis for the other 999 out of a thousand people. Imagine, if you will, a person of general knowledge who just got worried about computer security. This person asks you, "What should I do now?" I'll give two lists as answers; the first will be for general users and the second will be for small business situations.


For general users:

1. BACK UP ALL DATA AND APPLICATIONS - Having only one copy of something is risky. Establish a routine and a specific location for copying all data on all computers to a storage device or location. An automatic cloud arrangement is better than nothing, but that cloud (a cluster or network of rented storage devices) is not owned or principally controlled by you. Consider removable hard drives or thumb drives that can be placed in a fire-protection safe, either on premises or somewhere else.

2. DOCUMENT ALL CONFIGURATIONS AND SETTINGS - All the operating systems and applications that your computers run are probably not at default settings anymore. In documenting all settings, you as well will be inventorying all the applications you use. Network and internet provider information, including contact phone numbers for tech support, would be needed as well.

3. CHANGE ALL PASSWORDS - How many months has it been since you changed passwords on some things - or anything? And never EVER stick with a default password; these are known to the bad guys.

4. UPDATE ALL VERSIONS OF SOFTWARE AND APPLICATIONS - Free security updates are often available for antivirus, operating systems, apps and other aspects of what you do. In the rare case of an update breaking something, backups will be what saves you.
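For the backup step in item 1, even a few lines of shell can establish the routine. Here's a minimal sketch; the paths are invented for illustration, so point them at your own data directory and your own mounted backup drive:

```shell
#!/bin/sh
# Dated backup of a data directory to a second location
# (e.g. an external drive mounted at the destination path).
SRC="/tmp/my_documents"     # hypothetical data directory
DEST="/tmp/backup_drive"    # hypothetical mounted backup drive

mkdir -p "$SRC" "$DEST"
echo "important file" > "$SRC/notes.txt"   # demo content

# One compressed archive per day; keeping several days' worth
# means a bad file doesn't instantly overwrite your only good copy.
STAMP=$(date +%Y%m%d)
tar -czf "$DEST/docs-$STAMP.tar.gz" -C "$SRC" .
ls "$DEST"
```

Run it from cron (or by hand on a set day) and periodically copy the archives to that fire-protection safe.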



For small business situations:

1. BACK UP ALL DATA AND APPLICATIONS - Separate CD/DVD copies of all apps and OSes would be good, but images (dd or other bit-for-bit copies) should do. Best would be to have one copy of the backup close at hand and another offsite.

2. DOCUMENT ALL CONFIGURATIONS AND SETTINGS - Look into inventory programs for this purpose, or having your staff person write a script. Scripts may already exist that can be customized or chopped down for your situation.

3. CHANGE ALL PASSWORDS - Enable password aging and enforce complexity - mixes of capital and lowercase letters and special characters - and disallow dictionary words, password reuse and so on.

4. UPDATE ALL VERSIONS OF SOFTWARE AND APPLICATIONS - Here's where backups are important if any update breaks something. Multiple versions of some language like PHP might be needed for some web page situations, so updating could be tricky; check with your IT person or contract webmaster. The latest PHP could be needed for one thing, while a specific earlier version could be needed for another.
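As a starting point for item 2, the documentation script need not be fancy. A minimal sketch that snapshots a few basics into a dated file (the output path is invented; extend the command list with whatever your shop cares about, such as package lists, service states or firewall rules):

```shell
#!/bin/sh
# Minimal configuration snapshot: one dated text file per run.
OUT="/tmp/sysdoc-$(date +%Y%m%d).txt"   # hypothetical output location
{
    echo "=== System ===";   uname -a
    echo "=== Hostname ==="; uname -n
    echo "=== Disks ===";    df -h
    echo "=== Date ===";     date
} > "$OUT"
echo "wrote $OUT"
```

Keep the snapshots with your backups; after an incident, the most recent one tells you what "normal" looked like.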

by Pete (noreply@blogger.com) at October 19, 2016 03:27 PM

September 23, 2016

Pete Jamison

By the way... latest desktop rebuild story

I was given a Windows 10 desktop after an associate gave up and got a new machine after 2 or 3 years. I, king of the museum pieces, had not seen such a recent desktop, so I was happy to get a chance to check out what was new. This all-in-one 26-inch touchscreen with wireless keyboard and mouse looks to me more like a huge phone than a computer, but I'm not looking a gift horse in the mouth.

Until seeing the user-created files and such still hanging around. So it was time to bring the OS back from the recovery partition (staying with Windows for the moment), at least to see how that would fare. In the future, other OSs like Linux could come in via one of the only two USB ports provided (no ethernet or DVD burner included) but I stayed with what was on board as an experiment.

Lenovo's recovery app behaved as advertised. BUT, as will often be the case, it won't go back to Windows 10 if Windows 10 wasn't already there. This "box" began life at Windows 8. So the recovery replaces that system, which then wants about 225 updates, in two or three hunks, to get up to speed. Then, in about two days, the computer discovered that it could be Windows 8.1, so that was another 137 updates it needed. Then another 10 security updates after that. Then nine out of ten optionals (I elected not to put Skype onto this computer).

Then there were the little things. Put an adhesive cover over the camera lens. Grab a few utilities I favor, like SpyBot Anti-Beacon, SpywareBlaster and Mrs. Clinton's IT company's favorite, BleachBit (available for Linux, Windows and, on the command line, OS X). Uninstall lots of start menu crud. Go over it with screen cleaner. Oh, and if you do something like this, give things about a week to settle in. Some security updates won't go in until others do, so let's not assume Redmond got everything in the right order.

by Pete (noreply@blogger.com) at September 23, 2016 03:05 AM

August 23, 2016

Pete Jamison

No Joke

The future of spamming and other sorts of online skulduggery looks so bright, we'll have to wear shades. The gag observation about toasters having a need for IP addresses (network access) is already losing punch. Over the last few years, we've seen quite a progression. First, remember when this sort of thing began showing up?

Over the last few years, several cable providers have offered security systems operated via smartphone or tablet app (and, I assume, a browser version optimized for desktops and laptops, though I don't know). Then, shortly thereafter, curious experimenters found holes in the application code, vulnerabilities in the wireless capability as well as in the electromechanical controllers addressed by the application, and so forth.

Then a year or so later, along comes...

Here's a refrigerator that has cameras inside so you can check current stocks of food (it temporarily turns on the light in there so you can see) from your phone while you're at the grocery store. The door also has a message center, allowing the phones of everybody in the family to send to the screen and leave notes "on the door" about where they are, who needs to be doing what when, etc. I haven't heard about anybody breaking into the embedded processor serving up all these pretty graphics yet - but I haven't had time to find out, since...

Only weeks later, here we have:

This is a new kitchen oven's application that, through your wireless router, has the oven send info to your phone so that you can control the temperature setting and cooking time from anywhere your phone can get a signal. Assuming of course that you're not two floors down in the elevator of an underground parking garage with no repeater antenna in the elevator car.

NOW jump ahead to less than five days ago: I was talking to a security consultant who, I noticed after a minute, had a small bandage on her arm. I asked about the injury - but it was NOT an injury. It was a rejection problem with her implant. An implant for a medical reason? No. It was for holding information that could be scanned... like a business card. She then mentioned the possibilities involving having an implant that could do short-range scans of network activity, or simply be satisfied with passive collection...

I already use a wallet with shielding in it so as to interfere with unwanted (surreptitious) scanning of my credit card information from hidden devices (my buddy got one of his card numbers swiped at a Renaissance Festival this way recently; he did not use a shielded wallet at that time). But I digress.

The point of this musing is that the Internet of Things is, or soon will be, bigger than any human-operated network. And this 'thing' network is being peopled by embedded, unmanaged or lightly managed printers, appliances, phones, automotive controllers, software and who knows what else - devices that might have access to flash storage or even a drive somewhere. Storage plus network access sounds like a nice spamming or 'bot command-and-control outpost to me. And the growth of this lightly managed ecosystem is fueled by the convenience demands of people who, to put it mildly, are not network security engineers. Nor are their kids, to whom they've given Androids and iPhones as toys. From this point it looks easy (since with IPv6 everything can have an address) for lots of chaff activity to overwhelm the network maintenance people, whose staffs are minimized for cost-control reasons as it is. And chaff activity is now threatening to make email unusable, in the same way that Usenet was killed by spam posts. But this time it's not just a single application or feature that could be killed - it's the whole communications system that could be clogged.

As soon as I've come up with a quick, easy and cheap solution to all this, I'll post again. Gimme a few days. [EDIT: well, whaddaya know. I'd considered joking in this article about wearable Faraday cages that we might need soon, but checking my mail, I'd gotten an ad for this yesterday: The Scott eVest clothing line, now with PAN, or Personal Area Network!. It's already effing here.]

Update of the updated update: According to the Information Systems Security Association (in a recent IoT webinar soon to be documented on YouTube), the IoT numbered around 14 million devices two years ago, and by some projections will hit 50 million by 2020. Several issues on the webinar's agenda prompt me to wonder how one would control what information was being exfiltrated through, say, a home router, and whether or not you'd want that in a hyperconnected future world. Putting aside the worry that every amateur network admin would instantly be committing felonies by running afoul of export restrictions when the cheap electric toothbrush or pacemaker phoned home to China, I speculate that control of data flow and connection requests could be automated with some program resembling antivirus with firewall, operating with an agent at the router. Of course, users have been known in periods of difficulty to disable A/V, firewalls and all else (or to elect not to install such applications at the outset), so perhaps a bit more attention is needed here...
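The router-agent idea could be sketched, in the crudest form, with ordinary Linux firewall rules on the home router itself. This is only an illustration, not a product: the interface name is invented, the rule set is far from complete, and it would need root on a router you actually control.

```shell
#!/bin/sh
# Hypothetical egress policy for a home router's LAN side:
# default-deny forwarding, then allow only well-understood traffic.
LAN=br0   # assumed LAN bridge interface; yours will differ

iptables -P FORWARD DROP                                      # forward nothing by default
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i "$LAN" -p udp --dport 53  -j ACCEPT    # DNS
iptables -A FORWARD -i "$LAN" -p tcp --dport 80  -j ACCEPT    # HTTP
iptables -A FORWARD -i "$LAN" -p tcp --dport 443 -j ACCEPT    # HTTPS
# Anything else a "smart" toothbrush tries to phone home with gets logged.
iptables -A FORWARD -i "$LAN" -j LOG --log-prefix "IoT-egress-drop: "
```

The automated agent I'm speculating about would amount to software that maintains rules like these for you and flags the log entries.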

by Pete (noreply@blogger.com) at August 23, 2016 06:25 PM

August 22, 2016

Pete Jamison

Givin' Mr. Gregg Some Love

Around six months ago a user came into one of the meetings needing assistance setting up penetration-testing capability. Briefly, it involved setting up some target virtual machines in a virtually networked arrangement, and as I understood it, one could simply include a VM of Kali Linux or Fedora Security Lab or some such testing distribution to get a nice tool set on board with which to look at the other "boxes", and that would complete the arrangement. The virtualization scheme (a closed-source product) had become a bit problematic in a few ways, and thoughts on that predicament have stayed with me. Looking back, I see three approaches that would profitably address such a set of issues:

1. Use the free tech support included with the virtualization product to address the specific networking issues we looked at in the meeting. We're an open source club, and although it's possible that we have members familiar with almost any technology, the product was from a company with which our experience was limited. Unless I and others were missing something, it seems to me that a virtualization product (VirtualBox, VMware's latest chopped-down version, Parallels, Hyper-V, whatever) should provide a basic network out of the box. Just add an assortment of OSes plus one pentesting-specialist Linux distro and you're done.

2. Guaranteed: most people will hate this suggestion with a murderous passion, but if it were me, I'd eliminate the virtual layer and simply get a switch and three or four separate desktops and/or laptops. For real; no VMs. I'd hook up a Windows box, attach a Mac, add one with openSUSE or any general-purpose distro in the top 20 over at DistroWatch (right column, under the ad block, for the latest rankings), and the last box would run Kali. Along with a nice Network+ class over at udemy.com, this is what I'd think would teach network penetration best. After all, building a network is what you'd need to do by yourself first, in order to understand what you propose to penetrate. To keep the electricity bill down, I wouldn't run everything at the same time. I also benefit from not currently being married, which gets me around the exclamatory reactions to all the wiring and equipment lying everywhere.

3. I only wish, when the user's question originally arose, that I'd had a particular book to hold up in my non-virtual hand: Michael Gregg's "The Network Security Test Lab: A Step By Step Guide". It goes from the physical arrangement (either real or virtual) in the vein mentioned above to dozens of additional topics that flow naturally from the reasons one builds such a capability in the first place. He understands that pen testing is not an isolated capability. One needs an initial and intimate familiarity with the real estate that one proposes to explore.

This review is a preliminary one; it will provoke further articles from me. The book constitutes a syllabus for a class that universities should be giving if they aren't already. Of eleven chapters, 'constructing the lab' is confined to Chapter One. Further chapters address the whys and hows of using what Chapter One allows you to assemble. Passive Information Gathering is given its own chapter, discussing methods from banner grabbing to dumpster diving, in order to drive home the point that not all information is on the network - or perhaps on ANY network - and the point that the more noninvasive the surveillance, the better, at least at first. Results from passive gathering will tell a tester how exposed the client is, which will make important reading in the results report.

Properly, network traffic analysis and system identification are given separate chapters, and the analysis chapter comes first, as the questions will occur on the way in. In the traffic chapter (Chapter 3) you'll find Gregg is big on Wireshark as a main tool, wireless or wired as the problem may be. He quickly proceeds from packet basics to real-world examples involving tricks like VLAN hopping and different types of LAN taps. The System Detection and Analysis chapter (4) starts with a hex refresher and proceeds to discussions of services that different OSes tend to implement differently (this is how nmap and other tools try to ID boxes on the fly). A basic example is the TTL: the default Linux time-to-live for packets is 64, Windows' is 128 and many hardware devices put it out to 255 (unrestricted). Later, System Enumeration is given its own separate chapter (5). There are further ones for Encryption and Tunneling (6), Automated Tools (7), security problems peculiar to wireless networking (8), malware (9), Intrusion Detection and more malware analysis from a post-intrusion perspective (10) and Forensics (11).
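You can see that TTL default for yourself on any Linux box; the kernel exposes it under /proc. A quick sketch, assuming a stock Linux system:

```shell
# The kernel's default IP time-to-live for outgoing packets.
# On stock Linux this is 64; fingerprinting tools like nmap use
# such per-OS defaults (Windows 128, many network devices 255) as hints.
cat /proc/sys/net/ipv4/ip_default_ttl
```

Captured in Wireshark, the TTL of a reply (minus the hops it traversed) is one of the clues that narrows down what's on the other end.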

Some high points of those later chapters include discussions of password tools like PassTheHash (yuk yuk) and crackers that take slang like Klingon into account (Chapter 6); assessment-tool scriptability, as with Nessus (NASL), Metasploit (Ruby or Perl), nmap (Lua/NSE), and point-and-click tools like BeEF and Core Impact (Chapter 7); and resources on how various entities like retailers look for your phone in order to track you within their stores (Chapter 8). I should note that Chapters 9 and 10 overlap; 9 is the introductory level of malware discussion (using tools like Rootkit Hunter, virustotal.com, etc.) and 10 gets into the heavier stuff like IDS tuning (he prefers Snort, since its maintenance site has many preconfigured rules and signatures to get you started).

My compliments to the chef. This is a well-written and current resource that I'll be turning to again and again (I require much repetition to learn and Gregg repeats what I need repeated). He also writes for the Huffington Post website; check him out there.

by Pete (noreply@blogger.com) at August 22, 2016 06:15 AM

August 03, 2016

Pete Jamison

But This Can't Really Happen... much?

Here's the recent story on that deletion mistake, which turned out to be a marketing ploy (a HOAX), but it resembled actual incidents I recall from my days working with a hosting provider:

Guy Deletes Entire Company

Yes, it's indeed possible for somebody to come up with a script containing a mass deletion command that isn't properly limited to the current directory or restricted by file privileges (on Linux, Windows, Unix, Mac, what have you). Admin or root permission is really powerful, and any destructive command in a batch file or script needs to be pinned down to the highly specific location(s) it's meant to work on.
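The kind of guard rail I mean can be sketched like this (a hypothetical cleanup helper, not any particular product's code): refuse to delete anything unless the target resolves to a path inside an explicitly allowed base directory.

```python
import os
import shutil

def safe_delete(target, allowed_base):
    """Delete target only if it resolves inside allowed_base.

    Guards against the classic mistakes: an empty/unset variable,
    a relative path, or a symlink escaping the intended directory.
    """
    if not target:
        raise ValueError("refusing to delete: empty target path")
    real_target = os.path.realpath(target)
    real_base = os.path.realpath(allowed_base)
    # A commonpath mismatch means the target escaped the sandbox.
    if os.path.commonpath([real_target, real_base]) != real_base:
        raise ValueError(f"refusing to delete outside {real_base}: {real_target}")
    if real_target == real_base:
        raise ValueError("refusing to delete the base directory itself")
    shutil.rmtree(real_target)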

But how would you architect some protection into this? One could go on and on, but the story seems to suggest a one-man operation without a staff, meaning without a professional admin or even an outsourced admin company. Common situation. Even such a one-man-band could benefit from two backup servers (the server in the story had an open connection and presumably its drives were mapped/mounted, so it got the deletion command along with every other reachable node). What I'm suggesting is that, with two backup servers, you give one a regular address and the other a private, non-routable IP. Script a dd or other copy command from the regular server to the private server and never connect to the private server directly. That makes the first backup server a "staging area" and the second a pseudo-offline location.
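A minimal sketch of that hand-off, with the paths and layout entirely hypothetical: a scheduled job on the staging box pushes the newest dump to the private server (in practice you'd use dd or rsync over the non-routable interface as described; here the destination is a mounted directory so the logic is visible).

```python
import os
import shutil

# Hypothetical layout: staging_dir is the regular-address server's dump
# area; private_dir stands in for the pseudo-offline box, which is only
# ever written to by this one job over the non-routable interface.

def push_latest(staging_dir, private_dir):
    """Copy the newest file in staging_dir to private_dir; return its name."""
    candidates = [f for f in os.listdir(staging_dir)
                  if os.path.isfile(os.path.join(staging_dir, f))]
    if not candidates:
        return None  # nothing staged yet
    newest = max(candidates,
                 key=lambda f: os.path.getmtime(os.path.join(staging_dir, f)))
    shutil.copy2(os.path.join(staging_dir, newest),
                 os.path.join(private_dir, newest))
    return newest
```

The point of the design is that nothing else ever logs into the private box, so a deletion loose on the main network can't reach the archive.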

Another thing to consider is making the two backup servers physical (not virtual), since if anything went wrong at those locations (with much of everything else virtualized to save money), you could run recovery efforts there with exclusive access to the drive(s) on those boxes, rather than on some cloud or multi-tenant arrangement that might overwrite your data with other customers' stuff. I'm sure there are other and possibly better ideas, but the simplicity of this second server as a fail-safe is compelling for this more-than-plausible scenario. Further, the backup scheme could treat the first (staging) server as holding only the last one or two backups, so they'd be at hand for rebuilds, while the second pseudo-offline private server archives a larger number of backups.
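That split retention policy (staging holds only the last backup or two, the private box archives everything) is easy to script. A minimal sketch, assuming backups are plain files in one directory and the keep-count is arbitrary:

```python
import os

def prune_staging(staging_dir, keep=2):
    """Keep only the newest `keep` backup files in the staging area.

    The pseudo-offline server retains the full archive; staging only
    holds what's needed for a quick rebuild.
    """
    files = sorted(
        (f for f in os.listdir(staging_dir)
         if os.path.isfile(os.path.join(staging_dir, f))),
        key=lambda f: os.path.getmtime(os.path.join(staging_dir, f)),
        reverse=True,  # newest first
    )
    removed = []
    for old in files[keep:]:
        os.remove(os.path.join(staging_dir, old))
        removed.append(old)
    return removed
```

You'd run the prune only after the push to the private server has succeeded, so nothing is deleted before it's archived.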

[Edit: While the above story about the guy deleting the company with a single line of code has its implausibilities, a colleague told me about a similar situation that actually occurred a few years back. A local bank had set up much of its capabilities virtually, not just with servers fulfilling specific duties but with virtual networking as well (at least a few virtual switches). Something went wrong in accounting, the cloud provider didn't get paid on time, and about 100 boxes just disappeared. That's the way it works when you miss the rent - something else happens automatically (knock at the door, late fee, whatever). In this case, I understand the backups lived somewhere else (hooray), but the boxes themselves had to be rebuilt and reloaded from backup by poor Mr. Admin - one guy, not involved with the problem but still getting the heat. Just another day on duty!]

by Pete (noreply@blogger.com) at August 03, 2016 07:50 PM

May 11, 2016

Pete Jamison

Review: The Hacker Playbook 2: A Practical Guide To Penetration Testing
by Peter Kim
1st Edition, July 2015 / SecurePlanet LLC / ISBN-13: 9781512214567

I got a copy of this text from a class I just completed, which I'll describe in greater detail when you see me in person at the meeting. What's important to get down in this review is a description of the content (which is up to date), delivered in the proper order. For some reason this first printing either didn't get enough proofreading or the setup editor screwed up: chapter headings aren't bolded, enlarged, or even numbered. So I laboriously found all the chapter headings and will organize my comments by them below, for your convenience if you acquire a copy of Mr. Kim's excellent book (after figuring the headings out, I marked them on their pages and on the Contents page with a highlighter so that they became useful). Also, the book's descriptive conceit uses analogies to plays and strategies in American football to separate concepts, so keep that in mind. By the way, the "2" in the title doesn't mean you need to look up the previous volume; it denotes a second edition that includes all still-useful material from Kim's earlier book, with updates and additional coverage of tools and strategy as relevant to about 10 months ago.

"Pregame - The Setup" amounts to Chapter 1. This covers the physical setup needed for pen testing. He describes the features your lab needs, such as virtualization software and particular VMs representing popular OSes, and mentions particular aspects of popular systems, such as PowerShell in Windows. Finally, a "Learning" section has some early discussion of tools and strategies like Metasploitable and issues like the lack of secure form in binaries.

"Before The Snap - Scanning The Network" amounts to Chapter 2. Passive observation and discovery is the first topic, noting tools like Recon-NG, the 'discover' scripts and SpiderFoot. I should mention that all tool versions referred to are the ones included in Kali Linux. Kali is one of the largest and most popular security distributions, and it's referred to so much here that an interesting secondary use of Mr. Kim's book would be as a Kali manual.

IN FACT, Kali Linux is so badass, go get a copy right now if you don't have it, at this convenient location.

Coverage proceeds to password lists, examined with Wordhound and Brutescrape, then to active tools like Masscan and Sparta. Some vulnerability scanners are introduced, like Rapid7 Nexpose, Tenable Nessus and OpenVAS. Web apps are examined with the OWASP ZAP proxy, which is available for Windows, Linux and OS X. Finally, nmap, Burp and straight Nessus are mentioned.

"The Drive - Exploiting Scanner Findings" amounts to Chapter 3. It starts with a Metasploit Framework example which, while comprehensive, assumes previous experience with Metasploit; he gives links on where to get up to speed with that. Then there's a discussion of printers, NoSQLMap and Elasticsearch. Then some recent issues like Heartbleed and Shellshock are described (no doubt some stalwart professionals are still vulnerable to those).

"The Throw - Manual Web App Findings" amounts to Chapter 4. There's a general intro to web app pen testing and SQL injection as such, then a generous 15 pages on manual SQL injection methods, followed by 5 pages on Cross-Site Scripting. Other topics covered are Cross-Site Request Forgery, tokens and fuzzing. Then Kim mentions the Top Ten vulnerability list maintained by OWASP (I'll give it here for your convenience):
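The mechanics behind those manual-injection pages can be shown in miniature with an in-memory SQLite database: a string-built query falls to the classic `' OR '1'='1` payload, while a parameterized query treats the same payload as inert data. (The table and credentials here are invented for the demo.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # the classic tautology payload

def login_vulnerable(user, pw):
    # BAD: attacker-controlled strings are pasted into the SQL itself.
    q = f"SELECT name FROM users WHERE name = '{user}' AND password = '{pw}'"
    return conn.execute(q).fetchall()

def login_safe(user, pw):
    # GOOD: placeholders keep the payload as plain data.
    q = "SELECT name FROM users WHERE name = ? AND password = ?"
    return conn.execute(q, (user, pw)).fetchall()

print(login_vulnerable("alice", payload))  # logs in without the password
print(login_safe("alice", payload))        # returns nothing
```

The whole manual-testing discipline amounts to finding inputs where an application behaves like the first function instead of the second.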

OWASP Top Ten cheat sheet

"The Lateral Pass - Moving Through The Network" amounts to Chapter 5. Responder is the first tool mentioned; included in Kali, responder.py looks for multicast name resolution and NetBIOS information, and uses the Microsoft WPAD vulnerability (there's a TechNet article on that; evidently the service's PAC file points to a config file that's wide open, if you can find it). Then ARP poisoning is discussed from the standpoint of two Kali-included tools (ettercap and the Backdoor Factory proxy) and Cain & Abel for Windows. Two methods for getting network access at this point are given, with specific steps: 1) with any credentials, or 2) with Local Admin or Domain Admin account info. Two tools for manipulating the domain controller are then mentioned, SMBexec and psexec_ntdsgrab, both in their Kali-included versions. The convenient strategy of creating access "persistence" is then brought up, listing popular tools for this: Golden Ticket (a Kerberos crack), Skeleton Key (a domain admin backdoor) and Sticky Keys (a sort of automation of hitting Shift five times on a Windows host; this idea uses registry settings, which seems nonstealthy to me if the registry is locked or monitored in some way, but this is one of Kim's favorite tools so I'll defer to his experience).

"The Screen - Social Engineering" amounts to Chapter 5. The expected phishing and wireless methods are covered, but there's also the tale of the author actually purchasing domains similar to his targets' in order to take advantage of typographical errors when users enter domain addresses. He links to his complete research paper but also describes this very nasty idea, popularly referred to as "doppelganger domains". If an outfit uses a subdomain for email and somebody mistypes it, BANG: they go to the bad guy's location instead. To make a long story short, these similar addresses can be incorporated into links in emails or into icons on which to click. Do you want to maim somebody all of a sudden? I don't blame you. Then there are the methods that involve the risk of physical access, like planting rogue access points, dropping USB sticks with hidden file infectors in the hall or parking lot, purchased devices for smart card cracking, Kon-Boot on USB for at-the-host password bypass, and on and on.
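The doppelganger trick is easy to illustrate: for an outfit that runs mail through subdomains, dropping a dot yields a registrable lookalike that silently catches mistyped addresses. A toy generator (the example domains are made up):

```python
def doppelganger_domains(fqdn):
    """Generate dot-dropped lookalikes of a subdomained FQDN.

    e.g. 'us.mail.example.com' -> ['usmail.example.com', 'us.mailexample.com']
    An attacker registers one of these and quietly receives mistyped email.
    """
    labels = fqdn.split(".")
    lookalikes = []
    # Fuse each adjacent pair of labels, keeping at least domain.tld intact.
    for i in range(len(labels) - 2):
        fused = labels[:i] + [labels[i] + labels[i + 1]] + labels[i + 2:]
        lookalikes.append(".".join(fused))
    return lookalikes

print(doppelganger_domains("mail.example.com"))  # ['mailexample.com']
```

Defensively, you'd run something like this against your own domains and register (or at least monitor) the results before somebody else does.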

"The Quarterback Sneak - Evading AV" amounts to Chapter 6. BDF (the Backdoor Factory) is included in Kali, and its methods of changing the functionality of normal services are described in minute detail. Next is a method for using a tool called Evade to let another tool (Windows Credential Editor) cloak itself from antivirus programs and snatch cleartext passwords FROM MEMORY! Neat. Then there's a discussion of Veil, a tool that hides executables from AV detection by automatically re-coding them in Python. There's SMBExec, a suite that can get hashes out of a domain controller, randomize or recompile things to render the observable form unfamiliar to the AV scanner (not merely changing the filename), create reverse shells, etc. He wraps up by describing some keylogging methods.

"Special Teams - Cracking, Exploits And Tricks" amounts to Chapter 7. Lots of helpful wordlist locations are given, then the particular tools John (John The Ripper, JtR, JtR Jumbo) and oclHashcat are described. oclHashcat assumes use of a GPU and is mentioned as the author's favorite password cracker... although he does refer to historical use of rainbow tables. Specific vulnerability searching (within Kali at least) is described in the context of Searchsploit (for default queries) and the venerable BugTraq and Exploit-DB, not to mention msfconsole. Kim also gives specifics on bypassuac_injection and NetHunter (EXCLUSIVE to Kali/Offensive Security, it's an Android pen test platform), and he adds a description of building a custom reverse shell, which can get around firewalls and IDS. He also mentions three commercial products: Cobalt Strike, Core Impact and Immunity Canvas. Although Cobalt Strike costs thousands, Kim says it's a "must-have" for professional pen testers.
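What John and oclHashcat do at scale can be sketched in miniature: hash each wordlist candidate (plus a few simple mangling rules) and compare against the stolen hash. Real crackers add GPU parallelism and far richer rule engines; the hash, wordlist and Klingon-flavored password here are invented for the demo.

```python
import hashlib

def crack(target_hash, wordlist):
    """Try each candidate and a few mangled variants against an MD5 hash."""
    for word in wordlist:
        # Trivial mangling rules, in the spirit of JtR's rule engine.
        for candidate in (word, word.capitalize(), word + "1", word + "123"):
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

stolen = hashlib.md5(b"Qapla123").hexdigest()  # pretend this leaked
print(crack(stolen, ["password", "qapla", "Qapla"]))  # recovers 'Qapla123'
```

This is also why slang-aware wordlists matter: no amount of mangling helps if the base word isn't in the list at all.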

"Two Minute Drill - From Zero To Hero", a kind of Chapter 8, is a rundown of what's now possible for you once you're familiar with the tools involved. From the initial stage of "discovery" (as the lawyers say), you use a fake website to spoof the target's Outlook Web Access page (that hole was left open for a while on Hillary Clinton's email server, so although little was logged, it's a promising area of concern for that organization). With a Meterpreter script to add persistence on reboot for everything done so far, you'd then run PowerUser to create a new Administrator and find all the Domain Admins. Getting at least one of their passwords, you pull the hashes from the controller and dump the AD environment. Persist later surreptitious entry with Sticky Keys, and all that's left is the documentation paperwork and invoicing your satisfied customer!

I wish I fully understood everything I described above, but luckily I've got Kim's book. It either gives clear, specific steps for what's discussed or, where previous knowledge is assumed, links and references on where to get that knowledge. Reading this book is time well spent, if pen testing is what you need to do, or need to be able to sell (to a reputable actor, of course).

by Pete (noreply@blogger.com) at May 11, 2016 04:12 AM

February 24, 2016

Pete Jamison

This is the way it's done.

My congratulations to the maintainers of Linux Mint, the popular Debian-based Linux desktop distro. Two recent posts at their blog, namely

All forums users should change their passwords.


Beware of hacked ISOs if you downloaded Linux Mint on February 20th!

are examples of an open source maintenance organization admitting that there have been security issues AND getting the word out at breakneck speed, with clear explanations and instructions for fixes. Why can't more government and business organizations follow examples like this? Good work, guys/girls! This quick work is an example to us all.

by Pete (noreply@blogger.com) at February 24, 2016 01:34 AM

February 21, 2016

Pete Jamison

Stuff Learned From Current And Previous Jobs

1. In a disaster, always be the first to ask "Where are the backups?". [This presumes that you are NOT assigned to do them. If you ARE assigned to backups, immediately locate the tested and complete backup needed and be ready to transfer copies to all affected locations.]

2. On ambush calls (where you answer the phone and it's four people on speaker - business to business situation - your instant hot seat at the meeting): it's not as bad as you'd think. The call proves that there's a lot of money and credentials at the table that haven't yet figured the issue out. They may bluff or bluster but underneath, they're worried. De-escalate with tone!

3. In an actual emergency, ask yourself (PRIVATELY) whose emergency is it? That's a clue as to where the solution will be.

4. Regardless of what's happening, find out how events are being documented, by whom and where.

5. Regardless of what's happening, find out how others can follow events in real time.

6. Regardless of what's happening, find out how to verify what you're being told.

7. Find out how many others (admins, users, owners, customers) have access to the work area, and if any of the above need clear and fast warning in case of application restarts, system downtime, reconfigurations, file location changes, etc.

8. Answer this: do you have permission to do the work, and how do you prove that?

9. Be diplomatic when discussing technical documentation; you might be talking to the poor soul that had to write it. Even sketchy documentation was probably very hard to obtain.

by Pete (noreply@blogger.com) at February 21, 2016 05:59 AM

February 18, 2016

Pete Jamison

Another Clever Stall, putting off the Security Onion post yet again!

Important and weighty observations:

1. I have not (in at least 15 years) seen Microsoft Malicious Software Removal Tool find anything. No doubt this is a tribute to my top-notch adminning - or that I don't use Windows enough.

2. I see many suspicious listings for IT positions, and have for years. I wonder about, say, the ones for Linux System Administrator where, by the second paragraph, things like Active Directory creep in. And it's just as unfair for that Windows Administrator offer to eventually mention the little Linux project... [translation: we need two or three people, but we'll see how far we can get the budget to go...] And it gets worse. Did you hear about the hospital that paid the ransom to get back file access this week? News stories are mentioning that healthcare companies don't spend a lot on security these days, as if we needed to be told. Looks like the two issues (security problems and responsibility creep) have the same cause: budget problems.

3. This must be a good time to be a spammer. Need a server from which to do the job? Half the servers on the air probably don't have a dedicated admin due to cost-cutting (or some outfits never having had an admin at all). There are many server/website "admins" that haven't logged into their boxes in a year. Who's watching password aging? Patching? Upgrading? Is the box even up?

4. If you need instruction, Udemy is having a sale last I heard. And they do this fairly often: many pricey courses are offered for short periods at only $10. I'm taking a few now; instructors are hit/miss but all have at least been worth the time spent.

5. And in addition to Security Onion, I'm looking at DEFT (Digital Evidence and Forensics Toolkit).

by Pete (noreply@blogger.com) at February 18, 2016 05:48 PM

August 12, 2015

Pete Jamison

True Life Adventures In Server Support

A customer opened an issue with us, saying that his server had emailed him a crash report. It had, and the report was included in the incoming ticket. It was quite detailed and elegant in a way, well designed and prepared... and we had no earthly idea what it meant. The supervisor didn't know. The advanced support people didn't know. For a while, the internet didn't either - until I'd searched enough random lines from the report to come up with some hits on abrtd, which later made sense given that the customer was running CentOS. We thought at first that it was a system crash from some 'abort daemon' but no, the name was an acronym: Automatic Bug Reporting Tool (daemon), a Red Hat/CentOS thing. The web hits further suggested that this tool watched for application crashes, not system crashes. We figured uptime results would confirm that the box hadn't gone down, but the customer spent another two or three tickets finding the server password...

When the password was provided in the third ticket (I'd been the only one picking these up since I'd gotten the first one and was perversely curious), I got in, and uptime confirmed that the box had been up for over 50 days. Then I checked around the logs for something corresponding to the cryptic reports from abrtd. No luck. Fortunately, the net hits mentioned CLI commands for this daemon, one of which was "list". This was the jackpot, or at least the only promising lead. It gave six incidents (little report notices of about 5 lines each), each with a timestamp and the app concerned in the failure. I recognized five of the six as normal for a CentOS box with cPanel, but #6 was odd: something or other ending in .py, sitting in somebody's home directory.
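The triage I did by eye can be scripted: given entries like the ones the daemon's list command reports (an executable path plus a timestamp), flag anything whose binary lives under /home, since system daemons don't normally run from there. The entry format below is a simplified stand-in, not the tool's real output.

```python
def suspicious_crashes(entries):
    """Flag crash entries whose executable lives in a home directory.

    `entries` is a list of (executable_path, timestamp) tuples, a
    simplified stand-in for what a crash-reporting daemon lists.
    """
    return [(path, ts) for path, ts in entries
            if path.startswith("/home/")]

reports = [
    ("/usr/sbin/httpd", "2015-06-01 03:12"),
    ("/usr/bin/mysqld_safe", "2015-06-02 11:40"),
    ("/home/customer/mystery.py", "2015-06-03 02:07"),  # the odd one out
]
print(suspicious_crashes(reports))
```

It's a crude heuristic, but it would have surfaced that stray .py on the first pass instead of the sixth line of reading.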

Even a novice would suspect something amiss. I sprang into action and nicely suggested that perhaps he'd paid a developer to write this, and asked whether he might tell us what the script did, if anything. I also noted that the tool would probably flag as a problem any application activity it hadn't been told about.

Success. The customer speculated that it was a local tool and that he'd look into it. Ticket retired.

The moral of the story is that these modern operating systems are so extensively appointed that you can be blindsided by features that no one knew about – even in a roomful of professionals. I am continually amazed at what I learn just by either answering a ticket, answering the phone or just standing around.

by Pete (noreply@blogger.com) at August 12, 2015 01:39 PM

July 20, 2015

Pete Jamison

I Apologize In Advance For This Post (I swear the SecurityOnion test is coming!)

One should not have to think about this one, or write about it. I see server operations "professionals" almost five days a week that can't be bothered to lift a finger about security until after it's too late... but I've just seen something that's worse, if you can imagine that.

It appears that another major social media site has been hacked, but this one (which I won't dignify with a mention) is one devoted to the furthering of cheating. According to media reports it bills itself as “the world’s leading married dating service for discreet encounters”. In a world in which the internet is becoming more of a party line every day, the idea of establishing a central point for illicit activity would seem not to be a very good idea. But some evidently avail themselves of this.

Now, the guys I was mentioning before (Admins who slack off on security) are sometimes people who are intimidated by such simplicities as Windows Update. But simply being nontechnical doesn't mean being stupid. Often the uninitiated hire somebody else to handle that stuff. Fine and dandy. Some of them put it off until a problem impels them to hire outside help. Fine and dandy. That's one level of the problem. That's not as dumb as cheating on a spouse through a maybe-insecure site that publicly advertises its purpose as enabling illicit hookups.

This has implications for the security and server operations industries. How do you protect against a user base that pushes back frontiers in stupidity? This creates an internal threat that is dynamic (apparently getting worse by the year). Talk about a cost of doing business. This is the end of the post. I have no idea how to fix stupid (as the comedian Ron White says). I guess that making sure to do backups is the only answer, since by my logic, sooner or later (depending on how many bad hires one makes), you're gonna be rebuilding something.

by Pete (noreply@blogger.com) at July 20, 2015 07:34 PM

July 06, 2015

Pete Jamison

Two Items Actually Connected With Linux and/or Security And Technology!

First, let's get the Windows matter out of the way: I noticed something on one of my Win7 boxes that will be relevant to the security updates Redmond just pushed out. I'd seen McAfee VirusScan interfere with the installation of several KB items (updates) before (to fix it you just set the agent to STOP, do all the download/install/config stuff, then set it back with START), but I just experienced the same problem where McAfee wasn't on board. I'm trying out Bitdefender and realized that it has a comprehensive status control panel, on which it checks not only its own definition files but also things like Adobe updates and Windows Updates. In this case, checking for updates through Bitdefender's panel rather than through the ordinary OS updater (the WU panel under Security in W7) worked OK. Just sayin'.

Second, here's the page for a nice security-based distro some of you already know about:

Security Onion

It's based on the easy-to-use Ubuntu and includes tools like Snort, Snorby, Network Miner and Bro. I'll do a more in-depth analysis on it as soon as I kick the tires.

by Pete (noreply@blogger.com) at July 06, 2015 04:31 PM

June 22, 2015

Pete Jamison

3 Short Topics

Hmm. Several things technologic have cropped up with me lately. First, a hardware problem: I have several computers in an outbuilding, stored in less than optimal, rather humid conditions (there's A/C out there but it's not often running). At least one computer is 18 years old. Trying to start it up yesterday, I got an indication that the HDD could not be located (although a mouse cursor did appear successfully). I restarted and just got a blank screen. Three restarts later - same thing. How to fix it? What worked was to leave the power connected and come back in 30 minutes. Evidently, either something on the mobo "charged" up, or perhaps caps or resistors warmed up to a temperature they liked - or that something downstream from them liked. Startup went fine.

Another thing - a news story. Yet another break-in at some big organization compromised a bunch of information, this time lots of personal info per account, like medical data for thousands of users. What to do if that happened to you? I haven't read this in any industry publication, but I'd think it would be good to look at all the data exposed, then separate the pile into three categories: what cannot be changed, what can be changed, and what is less consequential. Your balance in the office coffee fund is probably a member of the third category (although perhaps it could be used to track your activity), but the first category would include stuff like SSNs, DOBs, military discharges, traffic tickets and other matters of public record. The second category interests me. Logins as well as passwords, and even VLANs, IPs and so forth, are things that don't have to be what they are. Though sometimes complex to change, they probably should be changed periodically.
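That three-pile sort can be expressed as a tiny classifier. The field lists below are just my reading of the categories above, illustrative assumptions rather than any industry standard:

```python
# Sort exposed fields into the three piles proposed above. The
# membership sets are illustrative assumptions, not a standard.
IMMUTABLE = {"ssn", "date_of_birth", "military_discharge", "traffic_tickets"}
CHANGEABLE = {"login", "password", "vlan", "ip_address"}

def triage_breach(exposed_fields):
    """Split a list of exposed field names into the three categories."""
    piles = {"cannot_change": [], "can_change": [], "low_consequence": []}
    for field in exposed_fields:
        if field in IMMUTABLE:
            piles["cannot_change"].append(field)
        elif field in CHANGEABLE:
            piles["can_change"].append(field)
        else:
            piles["low_consequence"].append(field)
    return piles

print(triage_breach(["ssn", "password", "coffee_fund_balance"]))
```

The "can_change" pile is your immediate to-do list after a breach; the "cannot_change" pile is what you monitor forever.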

And a third thing... on jobs in the tech area, I recently saw a headline claiming that the average time a position goes unfilled rose from 25 to 27 days. I don't have much way of checking the validity of the claim, but assuming it's true, I wonder why. Maybe there are fewer babies being born, maybe fewer immigrants arrive with technical proficiency, maybe students are dumber, maybe teaching standards are lower... but what if it's something else? What if economic pressures on businesses and government organizations are such that fewer people have to do more work? And therefore must know more than in previous times? This would mean a greater reliance on guru-level workers, who have never existed in vast numbers. Maybe. Just maybe.

by Pete (noreply@blogger.com) at June 22, 2015 09:03 PM

April 19, 2015

Pete Jamison

Be Your Own Network Administrator!

Whether it's the stories about the low-security Internet Of Things (IOT), or the stories about vulnerabilities aboard airliners running in-flight WIFI from the cockpit, it's not difficult to imagine problems from convenience arrangements that happen via connections that are always on, and that foster connections between stuff that doesn't go through people or devices that we monitor things on (but directly). That was a jawbreaker of a sentence. Lemme put it another way:

The lazy layman that turns on everything that's conveniently wireless is easy to see as heading for trouble if Ivan Ivanovich can get to his "Tax-O-Matic" files via the toaster. But just how bad could it get? Consider that the Information Technology field is chock full of security pros either hobbled by bad budgets and policies, or mediocre to bad in their job performance. Even banks are getting robbed remotely. Considering how badly the PROS are doing at safely running networks, what chance does Joe Sixpack have?

Well, there's probably no hope for the lazy ones since they don't lock their cars, either. Those who give these matters a thought at all, however, have some hope but a sobering realization: we must now be our own Network Administrators - that is, if we insist upon doing things over a network. This means two further things. First, for matters that need a network, one must know basic safe practices about routers, antivirus scanners, backups and such. Second, for matters that only optionally need a network, one should consider a non-network alternative.

Networks make lots of sense concerning computers, gaming platforms, streaming services and the legit sharing of various connections and resources. On the other hand, you can work a To-Do list in non-electronic forms quite easily. Printing out a hard copy of an item can often be avoided, but not always (certain concert tickets are in hand instantly that way, making will-call unnecessary). And as to online purchase alternatives, ever hear of cash paid in person? Sometimes big discounts are offered for that.

I'm no Luddite, longing for a return to the days of Fred Flintstone. But crappy security may make non-networked alternatives increasingly attractive. After all, Usenet was a technology that served us well for a while but died in an ocean of spam. Many say that's now happening to email. If the internet becomes even more of a party line than it already is, it could become a vulnerability as such, rather than a tool.

EDIT: It's a bit untoward to end this post there... for two reasons. First, I think if the 'net becomes more dangerous than useful, it will become so only for the average and casual user rather than the pro. If one uses complex passwords, encryption (VPNs and such) and general common sense, then one should be able to do business. The casual will get bitten. Second, for the casual user, things could get worse as the hardware moves away from separate and recognizable computing devices (like CPUs, drives, desktop computers themselves, hardware firewalls, switches, hubs, routers...) and closer to the smart phone and nothing else. Most of us know someone who has already abandoned the desktop, laptop and home internet connection in favor of just the phone, never mind needing to beg access to somebody else's printer or flash drive when that's needed. Issues of routing, security and such will be even harder for the novice to conceptualize when only seeing colored buttons on a phone... but perhaps I've been too harsh on the learned, who will be likely to recognize annoyances.

by Pete (noreply@blogger.com) at April 19, 2015 02:43 PM

April 16, 2015

Pete Jamison

Top Ten Server Rental Mistakes [this week anyway]

1. "Drives can be resized on the fly without a reload."--- probably a misinterpretation encouraged by programs like Partition Magic, which just make use of unassigned space rather than actually moving a partition. Even some virtual server offerings are in specific sizes only, so as to mimic physical servers (as opposed to other storage options which might be by the meg or gig).

2. "The “cloud” is a backup."--- nope. And I have no idea why some believe this. Wishful thinking?

3. "The “cloud” is something new."--- nope. A network's a network by any other meme.

4. "Any form of storage/CPU should be as fast as I need/do what I want."--- nope. You pays for what you gets. And don't get the cheapest storage and then want HA performance or try to run PBX/telephony or video.

5. "The hosting company is my administrator."--- nope. If you aren't specifically paying an administrator, nobody is your administrator.

6. "Reboots shouldn't be necessary."--- depends. Some systems rely on it more than others. And if all you're doing is reacting to preset email alerts and not bothering to log in, who are you to know?

7. "Reloads or backups shouldn't be necessary."--- nope. You or your administrator had better save all data and configurations or at some point it'll all be over but the crying.

8. "Security just gets in the way."--- and that's what it's supposed to do.

9. "I'm in a hurry/don't write anything down/get on with it."--- nope. I'm gonna document and cover my ass, and surprise! We're recording this conversation.

10. "This is easy; you shouldn't have to research it. You should be able to do it right now."--- Dear Reader: I'm not making this one up; I've actually heard it - a logic hole big enough to drive the Hindenburg through.

by Pete (noreply@blogger.com) at April 16, 2015 05:20 AM

April 15, 2015

Pete Jamison

Audacity is audacious. [updated to Version 1.4]

[...the steps H, I and J were a bit garbled earlier; they're all better now...]

Here, folks, is a real-world case study in which the open source sound recording program AUDACITY is seen to be very, very nice indeed.

Problem: you have 500 vinyl records, a new turntable, and you just got an iPod nano onto which to rip songs. The six-year-old Mac you've dedicated to music projects (because of its large storage) won't run the most recent version of iTunes, which controls the iPod and other Apple devices. What format do you tell Audacity to convert songs into - and do you need a copy of Audacity on two computers, so that a PC or newer Mac can convert the files from Audacity's .aup project file to something the iPod can read? And can you drag/drop onto the iPod, or does iTunes control that, too - and which version of iTunes? The old one on the music-project Mac, or the newer one on the Mac/PC that can talk to the iPod?

First world problems.

The simple way would be to have either a brand new Mac or PC (or even a Linux desktop at the initial burning stage) with outboard storage and do everything on one computer.

However, as John Belushi would say, "Buuuuuuuuuut Noooooooooooooooooooooooooooo".

I insist on using an old Mac for ripping and a Win7 box for loading. Plus there's the issue of how to get out of a vintage stereo receiver into a computer (it's actually not necessary with newer turntables)... here's how I did it.

Preparation -
1. Setup A consists of conventional receiver, speakers and new turntable with both RCA line level connections and USB output.
2. Setup B consists of older Mac with whatever version of iTunes was current when the unit was delivered, the newest version of AUDACITY for Mac (Audacity is available for Windows, Mac or Linux at http://www.audacity.sourceforge.net), and a USB-to-miniUSB cable (probably provided with the turntable) connected between the turntable and the Mac.
3. Setup C consists of recent Win7 box (with iPod optionally connected) and latest iTunes for Windows downloaded, plus newest version of AUDACITY for Windows.

Stragety -
A. Ignore iTunes on the older computer.
B. Ignore the conventional stereo during recording. Use the powered speaker system on the Mac locally to "monitor" the "live" LP output as the recording's taking place. To do that, switch on the turntable's internal preamp (this would be a modern USB-era turntable - I use an Audio Technica ATLP-120usb) for recording to the computer and switch it off for normal playback to the conventional stereo.
C. Fire up Audacity on the first computer (older Mac dedicated to music production) and hit File / New and hit the record button.
D. Start the turntable (since Audacity has begun recording, you can hear, or monitor, the record through the computer speakers now) and cue up just before the song, scratches and all.
E. Record the song and delete the extra stuff before and after the song (which you can wipe off of the graphic screen with the mouse).
F. Hit File / Save Project, give the rip a name and put it in a folder named ".aup files".
G. Put it onto a thumb drive or push it to the newer PC over a network.
H. Open the ripped file(s) with Audacity on the newer computer (on which iTunes will talk to the iPod).
I. Use Audacity to export the .aup file(s) to some converted format that the iPod likes ("export" is the relevant command). Put exported files into a folder named ".aup conversions". I haven't figured out the trick regarding the library for Apple Lossless yet, so I'm using AAC at the moment.
J. Drag/drop a converted file (AAC in this example) from your ".aup conversions" folder onto the iTunes playlist window. It will load into iTunes immediately as far as the Windows PC is concerned, and iTunes syncs the song from application to player when it senses that you've hooked up the iPod via its USB line.
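One gotcha with step G: an Audacity project isn't just the .aup file; it has a companion "_data" folder that must travel with it, or the project won't open on the other machine. A minimal shell sketch of the copy (the song name and thumb-drive path are made up for illustration):

```shell
# Stand-ins for a real rip: the .aup project file plus its _data folder.
mkdir -p /tmp/rips/song_data
touch /tmp/rips/song.aup
# Copy BOTH pieces to the (hypothetical) mounted thumb drive;
# the .aup file alone is incomplete without its _data directory.
DEST=/tmp/thumbdrive          # e.g. /media/usb on a Linux box
mkdir -p "$DEST"
cp -r /tmp/rips/song.aup /tmp/rips/song_data "$DEST"/
```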

Done. Simple, eh? By the way, I went with the iPod as a player since my car stereo has a cable connection for one, and front-panel controls for when it's hooked up. So I had to try it. The car system is a bit laggy on the commands (takes almost a second, not really instant) but good enough. The bottom line here is that you can rip on Audacity for Windows, Mac or Linux and maybe other things too - but how do you get the ripped file converted and into the player? That's where you need a computer that runs the latest iTunes, and from there it just syncs the player. Just two steps, really: create the .aup project file on anything, then get that file into the latest Audacity for conversion from project file to preferred music file, and you're pretty much there.

For clarity, let me repeat it another way: To rip LP's to iPod, getting from a USB-capable turntable into a USB-capable computer (monitoring with either phones or computer speakers) can be done on Windows, Mac or Linux, since Audacity is written for those OSs. That gets you as far as the .aup project file. After that, you have to have either a Mac or a Windows computer that runs whatever version of iTunes (preferably latest, probably) talks to the iPod or iWhatever that you just bought, plus it'll need the latest version of Audacity. Once on that newer machine, use Audacity to make MP3, AAC or whatever out of the project files, then drag/drop those new music files onto iTunes' playlist, then hook up the player gizmo and syncing (loading new songs) happens automagically.

Update: I don't yet know how it does it, but you can get from AAC to Apple Lossless by first converting the project file into AAC with Audacity, loading the song via drag/drop into iTunes and THEN right-clicking on the track listing. "Convert To Apple Lossless" is one of the choices. Then delete the earlier cut. Where does it get the extra info in the file? Maybe it just turns off a filter...

by Pete (noreply@blogger.com) at April 15, 2015 05:34 PM

March 09, 2015

Pete Jamison

"The least important part of the computer is the operating system."

The quote above is something an old friend said to me long ago, a few years before he got hired by the storied Bell Labs (then part of Lucent, later Alcatel-Lucent). His point was that an OS is just something to enable human interaction, plus something that provides a context for additional applications post-setup. The real work is done by the CPU, memory and other hardware. Even within the OS, the human part could be considered secondary, since the kernel does the core duties of file system management, memory management and job stack scheduling (to use antique terms). And if the kernel, needed programs and related libraries are working fine, who needs GUIs or even terminals? Put the whole thing on a card and have a remove/replace guy power down, swap in the new part, power up and rock on, dude. And if you must, virtualize the whole thing, script that replacement process and you have the same freedom from interactivity (with three additional problems if you're in the cloud: hypervisor vulnerabilities, API vulnerabilities, and access at the cloud provider by staff members whom you don't control).

But I digress.

I understand that personal computers are tools that allow humans to get stuff done, so I'm not telling you not to "startx", so to speak (to use a graphical user interface). After all, I'm now posting on a board related to a User's Group that is dedicated to a particular OS, which is FOR human interactivity. I'm merely advocating a concern for computing above and beyond the which-OS-is-better debate. I certainly have an opinion as to what's better or worse, but in point of fact, I use any and every system even remotely popular. That's due to curiosity as well as for professional reasons. The OS choice, while not unimportant, should be made with an eye toward ultimate use. What do you need to do? Why do you need to network from A to B? Is security a concern - or not? Is the computer for a narrow technical use or a general-use entertainment device? Is location or compatibility a concern? Is a computer or network even necessary? The OS question is only one of many questions on the way to the success of some ultimate use, project, activity or work.

by Pete (noreply@blogger.com) at March 09, 2015 12:52 AM

February 09, 2015

Pete Jamison

Customers Say The Darndest Things! {with apologies to Art Linkletter}

Here's a collection of pithy sayings and desperate pleas from your friend and mine, the Customer. I mean that seriously, since the Customer's money is green, and keeps us employed. My quotations, therefore, have not only a humorous angle, but a serious one, as they reveal the condition and situation of the Customer in ways that conversation with them outside the incident will almost always fail to deliver...

"Find out what's wrong with my network."

"Build me a network."

"See if you can hack into this address." - from a non-customer / member of the general public

"Why won't my virtual switch work?"

"What did you just do?"

"What did I just do?"

"What control panel app would get me around having to know commands?"

"Get me the log of the FSCK."

"Where are my backups?"

"This workstation has been up for months; why won't it log into your web app anymore? It can get everywhere else so the problem must be on your end."

"Fix my API code."

"Fix your API. This code should work."

"I shouldn't have to identify myself. Your security is a waste of my time."

"Why won't my server stay up?"

"What is RDP?"

No, really. All of those quotes reflect a legitimate customer need. They may or may not be needs within the scope of what you or I do, but if you can stretch that restrictive scope a bit, word gets around in the user community you serve that your outfit is solid and will help if it can. Word of mouth like that can't be valued highly enough. It's an extra mile that your competition probably won't take.

by Pete (noreply@blogger.com) at February 09, 2015 11:41 PM

January 02, 2015

Pete Jamison

Review: RTFM Red Team Field Manual by Ben Clark with graphic treatment by Joe Vest

I would not have bitten on this had a holder of a security certification not recommended it to me, but I did like the idea of a literal, physical BOOK coming out to serve as a handy reference for one-liners and other CLI needs in a contemporary context. Back in ages past, hand typed or mimeo'd manuals were the only way to transmit such crib sheets (until books like Kirk Waingrow's "Unix Hints And Hacks" came along). I understand that such things go out of date. I also understand that they're more easily available in the portable form of a file. But books don't require batteries, and you can write in them or in back of them, and they don't set off metal detectors.

But is THIS book any good? Yes. There are nine broad categories of hints, but little more organization than that, since not much is needed. There is no narrative; these are merely convenient references. And convenient they are: it immediately made itself useful in helping me memorize common ports with a simple list at about the middle of the book under "Networking" - great for future test scores, which unfairly demand memorization of what one normally finds via search engine these days. The nine categories come along with references, an index and a clever conceit of plain old typewriter font all the way through.

Now, it may be annoying to some of you that elementary matters like the meaning of passwd or man are included among more difficult listings. Indeed, I didn't expect to see, under its own heading of "Updating KALI", the ordinary apt-get update and apt-get upgrade spelled out, but that sort of thing makes the book useful to the novice as well as to the more informed. For the latter group, there are things like, under "Native Windows Port Forward", the netsh one-liners that are hard to remember. There are about 20 Cisco commands all in one place, all of which I'd have to look up. And an awk-loaded nmap idea for reverse DNS lookup, which then organizes the results for clarity. And so on.

I'm happy I bought this one; it's continuing to prove useful to fill in the blanks of stuff I should have known by now, and is presented in a familiar and un-complex paper form. Four stars out of four.

by Pete (noreply@blogger.com) at January 02, 2015 08:33 PM

November 15, 2014

Pete Jamison

Minor Antivirus Issues

It's almost too minor to mention, but at work we see it fairly often. Somebody has a Windows Server product like '08 and they can't install some recent Windows Updates. Rather than even try to Google the KB number, they call in (I'm like a cop who assumes everybody's a crook after dealing with crooks daily for 20 years - only the real dummies call in with this one; most customers can handle it themselves) and complain that the updates won't load. This of course isn't even a Windows issue. The issue I'm talking about is the one you discover the second you spring into action and actually Google the KB number.

If the KB numbers that come back from the update complaint are found to be recent security updates, check and see if the operator is running a popular antivirus product like McAfee. If so, turn the agent to STOP, do all the Windows Update permutations (downloading, installing and multiple rebooting for config passes) and then turn the agent to START. You are the hero.

[My completely unproven explanation for why the KB'z won't load is that the agent sees quoted strings of bad code, trojans, etc. in the updates as they're being described to the OS so it knows what to ignore. McAfee and similar companies then eventually must engineer a way to keep the agent from treating these strings as infections rather than just descriptions. MS probably would not want to publish how its updates tell the OS what to ignore, for obvious reasons.]

Here's the interesting part: what enables you to fix the issue here is the ability to turn the antivirus agent off. According to one school of thought, one shouldn't build into the agent any ability to be turned off, since that GUI (or control of any kind) might be exploitable. By that logic, you should build the agent the way ESET Nod32 does it - without any kind of "off" control. This not only removes the danger of outside intervention but keeps the agent running through the next bootup so as to catch boot viruses. Nod32 does offer a temporary turnoff control that operates "until next boot", though I suspect even that isn't totally off, since you want to catch boot nasties coming back around. Great. But...

I ran into the Windows Security Update KB 2345245345345whatever issue - on a machine with Nod32 on it. Solution: uninstall the A/V, do the updates and reboots, then reinstall the A/V. It was the only way - and I do not fault ESET for this, since the safety is worth the extra trouble.

Just make sure the owner of the box you're working on is buying the Mai Tais. The world is still full of people that can't even handle Windows Update with NO complications, although that thing's been around for, what, 25 years?

by Pete (noreply@blogger.com) at November 15, 2014 01:57 AM

October 23, 2014

Pete Jamison

Hello and welcome to Medium Tech Computer Rentals... how can we help you today?!

My job is so wonderful. I have experiences that you simply can't make up. Naturally, I'll falsify everything possible to protect the innocent here, but the true horror of the experience will easily remain...

So this call comes in from Customer XYZ with Credentials BR-549 or whatever and the question is about what our policy/position is on the latest discovered vulnerability in CLI tool ABC or something. I reply for the 5th time today that we are not a security organization, but we have heard that the policy of the Frapdoodle Application's maintenance organization is that patch 4524232566655 will be available on the first of the month, at yadayadayada.it or some other place that I just Googled. Call ended.

Then I almost plotzed as I thought about what had just happened. See, I glanced again at the customer credential screen and the organization title sank in. The caller was with a nationally recognized computer security consultancy. A really big name. And he'd just asked about a vulnerability currently in the news, even outside the industry. And I'd gotten the answer from a freaking search engine.

This consultancy doubtless pays the guy that called our outfit two to three times the money I make - in order that he get the answer from somebody else who thought to go to the interwebs and get it for nothing. The guy that called (plus hordes of his co-workers) quite possibly hit the cocktail party circuit advertising themselves as smarter than you or me, yet they may not be able to Google. They possibly have degrees, certifications, years of experience... or something. And got hired. And are morons.

Reminds me of the guy sometime back that took a security class in which the teacher put onto the big screen the activities of somebody in the class logging into their bank account from a school machine, with the snooping app's window up in the corner with the grabbed account and password [OOPS I'VE TOLD THIS STORY BEFORE].

I'd better just stop here, lest we all feel a bit more unsafe, what with all these professionals protecting us...

by Pete (noreply@blogger.com) at October 23, 2014 02:20 AM

October 11, 2014

Pete Jamison

So some Chief Tech Security people...

... might be scrambling, considering the latest data breaches being reported by Forbes Online. Or maybe they're not.

According to this story, malware, employee misconduct, phishing scams or whatever has recently affected either customers or third party participants in the business activities of JP Morgan Chase, Dairy Queen, Touchstone Medical Imaging and AT&T (in addition to Target and Home Depot a month or two back). What if you were in charge of Information Security at those joints? What would your day have been like this week?

Maybe it would have been crisis mode - or perhaps just another day at the office. If I were one of those people and were reasonably decent at my job, I think I'd be memo-ing quite a bit. My main memo this week would have had an email subject line with a nice version of "I TOLD YOU SO". I'd first list the things that I'd been recommending to management that our people do for best practices, with emphasis on what measures had been overruled, when, why, by whom and hopefully with meeting minutes from the discussion in which I was shot down due to impracticality. Then I'd list or link the emails that had been sent out detailing voluntary or mandatory practices that people were actually supposed to be following and to which management had actually given lip service.

So, the contingency list: free up money for the security consulting outfit to do damage control, kick off the all-important password change procedure, pull tested backups from a secure or offsite location, reload sensitive systems... and regularly tell certain people "I Told You So". This last only works if one DID tell them.

by Pete (noreply@blogger.com) at October 11, 2014 10:38 AM

April 30, 2014

Pete Jamison

Job Opportunities

If you have a working knowledge of Linux server and/or Windows server operation (and networking or scripting would be plusses), take a look at the Houston Softlayer offerings at Softlayer's link to the IBM Employment page:


Softlayer is a major force in both physical and virtual server rental, second only to Google in number of servers currently under administration, and is now a division of IBM.

If your talents lean toward programming, specifically in Larry Wall's venerable PERL, give


a look. Their Houston office has openings in support and in development.

by Pete (noreply@blogger.com) at April 30, 2014 01:15 AM

April 28, 2014

Pete Jamison

Various Stuffs And Things

First of all, you may notice that I've updated the meeting place in the banner above. Meetings of HLUG are now at the Houston HQ of the maker of a popular web hosting application, WHM/cPanel (many thanks to Hal-PC for their past support).

Second, I made last Wednesday's meeting of the group (these are weekly, 6-9pm) and was very impressed at the size and quality of the turnout. The program was on Hugin, the panorama stitcher for Linux (they have a Sourceforge page), but the meeting was also interesting in that although there was at least one "hobbyist" project going (a user was getting printer hookup help for a recent Ubuntu load), the level of programming/scripting discussion was high, at least to me. Only about half of the 14 or so attendees were cPanel veterans, and a wide variety of IT experience was represented.

Third, I also discovered that the cPanel location is not only the host of HLUG now but also of the local branch of Perl Mongers, the group dedicated to coding in that language (Practical Extraction and Reporting Language). I'll get a link for them as soon as I figure out where the Houston branch is located on the interwebs. ALSO - special thanks to the cPanel executive in attendance who sprung upon me the news that BackTrack (the famous security distro) has been rechristened as Kali Linux. This must have happened over the last 60 days... or maybe I've been distracted more than usual!

More to come - things are happening!

by Pete (noreply@blogger.com) at April 28, 2014 01:10 AM

March 27, 2014

Pete Jamison

I was only a bystander but here's what I saw...

So this guy says he's getting a bunch of unsolicited SMTP interrogations and it's filling up his logs, requiring him to check in a lot to make sure he's still got room in that directory so as not to strangle commands and crash the server. Could we "fix it"? Actually, no, since we don't manage his stuff, but we checked into it for him by asking another department with the clearance to see more than we can. "This guy" fears some kind of DDOS, but our superiors point out that the incoming SMTP connection attempts aren't in amounts any greater than 200 kbps, and add to that the fact that the server operator admits that email isn't that active. I get onto the command line at one point and watch uptime, top, df and so on, noticing not much load or activity.

Apparently there's not much to any of this... except the fact that the guy's logs have been filling up and this effect took the server down at least once. The SMTP requests are coming from many obviously bogus IP's and not actually generating mail. BUT the connections must be logged, so /var/whatever is overloaded at some point and Crash. This looks to my inexperienced eyes like an unusual form of Denial Of Service.

A few of us came up with a strategy for the server operator. First, check in often if nothing else. Second, perhaps come up with a cron job or two that will archive or dump the affected logs based on size. Third, if he has some kind of web hosting control panel like Plesk, there's probably an app that's a front end to crontab or logrotate that would easily schedule size-based dumps as above.
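The second item - dumping the log once it passes a size threshold - can be a tiny script run from cron. A minimal sketch (the path and threshold are made-up examples, and this demo works on a scratch file rather than a real mail log):

```shell
# Trim a log once it passes a size cap, keeping one archived copy.
# /tmp/demo-maillog stands in for the real target (e.g. /var/log/maillog);
# in production you'd run this from cron every few minutes.
LOG=/tmp/demo-maillog
MAX=1024                          # demo cap: 1 KB (real world: tens of MB)
head -c 2048 /dev/zero > "$LOG"   # simulate an oversized log
if [ "$(wc -c < "$LOG")" -gt "$MAX" ]; then
    cp "$LOG" "$LOG.old"          # archive a copy first
    : > "$LOG"                    # then truncate the live log in place
fi
```

Truncating with `: >` rather than deleting matters here: the daemon keeps its open file handle, so removing the file outright wouldn't free the space until a restart. logrotate's size-based rotation with copytruncate does the same job more robustly.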

And it came to a head on a weekend. That's the main reason we bought into the DDOS conspiracy theory (much skullduggery happens from Friday to Sunday), but the guy probably needed to set up rotation anyway. Serves him right, as well as us all. Also, he could be misconfigured in 15 other places, in addition to Mr. BogusIPs coming at him.

The moral of the story might be that any protocol activity on any server or application THAT MUST BE LOGGED can constitute a kind of vulnerability, since if not checked, the log directories involved would fill up at some point. And if you didn't throw bogus requests at your target in multi-gigabit-per-second amounts, you fly under the radar of either the server or the network outfits. It's an effective trick, until the target figures it out - or unless the target configured logrotate effectively in the first place.

EDIT: I should know better than to gravitate to conspiracy, particularly when I recall the quote "never attribute to deviousness what can be adequately explained by incompetence"... Another explanation for the above facts (pointed out by someone with more experience in these matters than me) is that, in configuring some DNS matter for email purposes, maybe somebody got one digit of an IP address wrong... and our "victim" is getting someone else's mail through no fault of his own. Think about it: 175 kbps or so, random IP's... yeah, maybe so. It would still be a clever ploy to deny service due to reboot inducement, but again, if one does logrotate right, the issue never comes up.

by Pete (noreply@blogger.com) at March 27, 2014 01:45 AM

November 04, 2013

Pete Jamison

Current Windows Routine featuring Your Pal, TRK! (edit 2.0 at bottom)

I have several computers in different configurations, OS situations and states of disrepair. For a particular Windows 7 computer, I just went through what's a kind of a weekend routine for cleanup. Here's my checklist for such a machine, useful not only to maintain system health but also to be familiar with the maintenance tools themselves:

1. backup any/all user-created or downloaded important files

2. run onboard utilities like Disk Cleanup, Defrag, MRT (Malicious SW Removal Tool from Microsoft, which takes a long time to run but what the heck), MS Security Essentials

3. run Window/Washer or CCleaner (crap cleaner?)-type third party products for unneeded file removal if you wish

4. run Spyware Blaster by brightfort.com (keep this program around to prevent unauthorized installers from running; it's not antivirus but an installation preventer that's active only at bootup)

And last but not least:

5. use trk with clamav (run commands "freshclam" and "clamscan")

The interesting thing this time around was that I presumed no special knowledge on the user's part and simply downloaded a new copy from the Trinity Rescue Kit download page and selected the self-burning .exe file for Windows. It recognized the onboard burner and asked for a blank, then burned and asked for reboot. Presto - we're a Linux box now (as the hardware reads the burned CD's OS rather than the HDD's OS). Although I had to remind myself of "freshclam" and "clamscan" by looking on the net from a different computer since the help section on the CLI didn't mention those, the run went without issue, taking less than ten seconds to update and less than ten seconds to scan.
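For reference, the two ClamAV commands mentioned above look much the same outside TRK, on any Linux box with the clamav package installed. A hedged sketch (the scan target is an example, and the error guards are there because freshclam usually needs root and clamscan exits nonzero when it finds something):

```shell
# Refresh ClamAV's signature database, then scan a directory tree,
# printing only the files found to be infected. /tmp is a stand-in target.
TARGET=/tmp
if command -v clamscan >/dev/null 2>&1; then
    freshclam || true                         # update definitions (needs root)
    clamscan -r --infected "$TARGET" || true  # recurse; list only infected files
    RESULT=scanned
else
    RESULT=clamav-missing                     # degrade politely where clamav is absent
fi
echo "clamav check: $RESULT"
```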

I also use Malwarebytes on this particular machine, and should probably do a closer look for this blog on all of these tools. I just wanted to put down this much to record the minimum of my approach for Windows-world maintenance - and I should also repeat my presentation of years ago of running all these things on an intentionally-virus'd machine. That was fun.


NOTE: an editor's addition is coming shortly, which will prove that I am not a complete MORON. The above results were obviously the ten-second results, not the LONG VERSION, which would be to run "updatetrk" and then get serious with "virusscan -a clam; virusscan -a fprot; virusscan -a bde; virusscan -a va" ... these commands update and run four different scanners (there's a fifth, Avast, which I left off since I don't have the free key for it yet) which in order are ClamAV, F-Prot, BitDefender and Vexira. They are running now; stay tuned! (this is gonna take awhile, running as they are in -uiv, or unbelievably insanely verbose mode)


ANOTHER NOTE: Ok, it got a bit more complex. I started out with a clean Windows machine (I use Windows mainly as a movie viewer) but accidentally virus'd myself by getting curious, searching for a free Windows version of Vexira, choosing one of the WRONG links (pretending to be genuine) in a search and pulling in about 130 ads. That was an opportunity to run Windows solutions and see if they worked. Malwarebytes actually got almost all of them (126, I think) and they were still gone upon reboot. Further scans by Security Essentials and the protection state of Spyware Blaster didn't note anything. But running the string of four scanners from TRK as mentioned above found an interesting imposition disguised as a legit autorun file. TRK placed it into a conveniently created and named directory, "TRK-INFECTED". For some reason I found TRK's command line difficult to fathom and in desperation found the INFECTED directory and deleted the file using the GUI from another Linux box (but in retrospect the CLI was simple). The offender had been rendered inoperative by having been compressed (if there were other changes made to 'anaesthetize' it, I haven't discovered what they were via the documentation yet).

So there it is. TRK finds a bad guy - which the onboard Windows scanners missed - and renders it ready for deletion. Four of the five listed scanners were used, but after days I'm still waiting for the free Avast key to show up via email. A full list of the goodies unique to TRK is at the TrinityHome page (linked above) if you go to Documentation, then the TRK Specific link... and there's an "all commands" link at the bottom of that list on the left.

by Pete (noreply@blogger.com) at November 04, 2013 03:45 PM

September 26, 2013

Kojo Idrissa

HLUG: New Location! New Plans!!

Starting tonight (2013-09-25), The HLUG Linux Lab will be meeting at a new location:


3131 West Alabama St

Houston, Texas 77098

Same time as always (6-9pm). BOLD new things coming down the pike! Go to http://www.houstonlinux.org for more information!

by HLUG Kojo (noreply@blogger.com) at September 26, 2013 01:54 AM

August 19, 2013

Pete Jamison

Going On The Offensive

Continuing with my current theme of best practice advocacy, allow me to openly attack a terrible and unintelligent tendency in computer users, from the novice to the expert. It is an appeal to laziness and an invitation to disaster: the resistance to the OPERATING SYSTEM RELOAD [I'm primarily speaking of the end user machine, but these comments can easily be applied to servers]. Why is an OS reload a good thing?

First, look at the benefits. IF one is prepared for it (with recovery or original system media or both) one gets, at the price of an hour or two, a brand new software load at no cost (which can boost performance by up to 5-10%). Or if you wish, hand the media to some tech and pay the fare and have somebody else do it.

Second, look at the protection. From hacks to lightning strikes to hardware failure to spilling beer into the tower on New Year's Eve, one is prepared for the worst. And a new copy of your system can fix dozens or even hundreds of problems you may not know were present.

Third, consider that in this era of cheapness, manufacturers are no longer so good about including reload CDs/DVDs as standard equipment. They can come at extra cost, or often one is expected to get blank media and burn one's own copy with an onboard script, wizard or program; if one forgets to do this and remembers six months later, one can only back the system up in its condition at that time, not as brand-new. And a year after buying a computer, recovery media previously available may be discontinued.

Why are OS reloads resisted? For many reasons, none good. There's lack of preparation: to do a reload, one must be prepared to rebuild the whole system, from the base OS to additional drivers and applications, to the configuration of all of the above. If one has not documented any of this, one has work to do (that should have been already done). Then, there's the belief that it shouldn't have to be done more than once. This is uninformed; all system files become corrupt over time due to program interaction, electrical surges, disk location errors, lack of file system maintenance, accidental deletion and so forth. There are no doubt more reasons, but the point's been made.

The moral of the story is to remember that all file storage is somewhat risky. Online internet backup? Great, but there's some security risk there (better than nothing, though). Burning your user files to disk? Great, but the computer itself must be rebuilt with something. The main thing would be to save, build or burn an OS copy at the start. Buy the rebuild disk if you must - immediately. Have disks or installers for your needed apps as well, and special drivers for wireless capability, camera adapters, etc. burned to the same or other media.
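The documentation piece above can start as small as a dated snapshot of your package list and configs. A minimal sketch, assuming a Debian-family Linux box (the paths are examples; substitute your own, and Windows or Mac users would use their platforms' equivalents):

```shell
# Capture the two things a rebuild needs most: what was installed,
# and how it was configured. Paths here are illustrative.
NOTES=/tmp/rebuild-notes
mkdir -p "$NOTES"
# Installed-package list (dpkg is Debian/Ubuntu; rpm -qa on the Red Hat family).
dpkg --get-selections > "$NOTES/packages.txt" 2>/dev/null || true
# Dated archive of system configuration; skip unreadable files quietly.
tar -czf "$NOTES/etc-$(date +%Y%m%d).tar.gz" /etc 2>/dev/null || true
ls "$NOTES"
```

Burn the resulting folder to the same media as your recovery disk and the reload stops being a research project.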

All this is more than busy work. It's a free insurance policy. And the preservation of the system (and its value to your work) could be worth far more than the physical computer.

by Pete (noreply@blogger.com) at August 19, 2013 03:30 PM

August 18, 2013

Pete Jamison

In My Limited Experience...

In my limited experience with virtual computing instances, many people of all levels of technical competence miss what I consider the obvious. I could be wrong in how obvious these points are, but here they are:

1) If you have 20 virtual instances on one physical host, that host becomes 20 times more important than it was before. Other things being equal (OTBE), when that host goes down, that's 19 more middle-of-the-night phone calls than a single physical instance would have generated. And these are computers. Go down they will. True, it's only the Administrator who gets those emails or calls, but what if that person is on vacation, doesn't answer the phone... or quits?

2) A virtual instance simulates various hardware components with files representing or standing in for them. That eliminates a FEW problems, like power supply failures or hardware-related RAM failures (out-of-memory problems will still happen). Everything else remains: ALL the administration issues, jobs and potential worries. ALL OF THEM. Although it's virtual, it's still a computer.

3) I can foresee a day when virtual computers will be more stable and reliable than physical ones. We aren't there yet by any measure at all. Those who rush to virtual computing due to cost alone are slowly finding out that there's no magic bullet here.

4) It's just as easy to underbuild a virtual computer as it is to underbuild a physical one.

5) And as one is dealing with these virtual machines, add in all the other problems of networked arrangements like this ("the cloud" or "the fog"), like security issues, the it-all-goes-away-if-you-don't-pay-your-bill matter, the fact that virtuality does not constitute a backup (you still have to do that yourself unless you have a managed - EXPENSIVE - solution), etc., etc.

by Pete (noreply@blogger.com) at August 18, 2013 06:49 PM

August 05, 2013

Pete Jamison

And a simpler take on "exposure"...

Since my most recent piece on the NSA, there have been more stories of other agencies with data collection programs, plus one on the possibility of remote activation of cell phone microphones (recalling that possibility under certain conditions during the heyday of rotary phones). Whether computers or less complex vectors are involved, I'll repeat some observations I once took to be obvious...

First of all, we should remind ourselves not to do dumb things in general. Here's an article at Bankrate.com about what to do and not do:

Five Ways To Expose Yourself To Identity Theft

Secondly, remember the old movies and TV shows that depict really short telephone numbers and live operators who can overhear conversations? In principle, all communications systems can be party lines. If a conversation is to be private, keep it off of the system (whatever system takes control of the material away from you). And don't discuss it in restaurants or crowded elevators.

Thirdly, if an employer is dumb enough to disqualify you for a job or promotion due to internet remarks, perhaps they don't deserve you. On the other hand, if you're dumb enough to broadcast professionally damaging material about, connected to or from yourself, perhaps they were right to check.

Fourthly, the proper way to handle the legal aspects of information exposure is to start with revising the Patriot Act, rolling back its more invasive aspects as mentioned in my earlier post, plus adding big penalties for official abuse of information discovered in "fishing expedition" fashion. The issues here do not concern a particular agency; the issues are about evidence collection as such.

Fifthly, remember that as above, it's not just the agency in the news right now that does information collection. Technology is getting more affordable all the time, such that state and local agencies, less-affluent crooks and even nosy neighbors can buy things with capabilities that were fantasies in 1960's spy movies.

So, don't blame the computer or the phone. We must keep track of what's possible to officials with bad judgement, what's possible in the enacting of bad law, and what's possible when you don't think before you act.

by Pete (noreply@blogger.com) at August 05, 2013 02:06 AM

June 13, 2013

Pete Jamison

Datamining: The NSA Is Not A Problem!

The recent developments regarding an NSA whistleblower who states concern over invasive practices regarding phone calls and other communications of Americans not accused of any crime deserve careful consideration. But don't make the mistake of blaming field agents - or even upper command - of the National Security Agency, the main cryptographic arm of the US Government. They don't make policy. The politicians do, via the most relevant law here, which is the Patriot Act, passed during the previous Presidential administration.

The fix is obvious: keep the ability to do traditional "wiretapping" when backed up by a search warrant documenting probable cause. Rein in the newer blanket permissions that don't require probable cause. Bolster punishments for government abuse of gathered information, as in "fishing expeditions". Sure, it would be more convenient in doing search and seizure if lots of pre-gathered information were available, but that wouldn't be reasonable in the case of the innocent.

And remember what's known of the role of the NSA in helping win the cold war. These agents are soldiers, too. They've risked all and sometimes died just like soldiers who operate in the open, but could never receive the recognition that above-ground GI's get. I don't know of any problem with the whistleblower's argument (as I currently understand it) that we should be alarmed if basic freedoms are threatened, but the fix isn't to demonize operatives every bit as valuable as he. If the public gets clear on the value of our lives and persons, and the politicians don't give us the protections we and the constitution require, then we deal with the politicians, not the soldiers.

And the first thing I did in reaction to these events was not to gripe on Facebook or Twitter and not even to post this blog. It was to send $35 to the National Cryptologic Museum. This project works to build a new facility to preserve the history of the NSA, and is near the National Vigilance Park, which preserves listening post aircraft similar to those lost in service. If we forget the significance of our own heroic actions and those of our forefathers, the cost will be our souls.

by Pete (noreply@blogger.com) at June 13, 2013 02:16 AM

June 09, 2013

Pete Jamison

Hurricane Season!!

...which isn't just for the American South anymore, as NYC recently demonstrated. So along my usual lines of best practice suggestions, what would be prudent to think about under the conditions that began June 1st?...

OFFSITE BACKUP LOCATIONS - Yes, there's the net or the network or the cloud or the fog. But if data is pushed somewhere, is it being pushed only down the street, where the same flood that hits you hits the data center? And if the data center is across the country or the world, how slowly does your data creep across the connection?
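A little arithmetic makes the "how slowly" question concrete. As a made-up example, pushing 500 GB to a distant data center over a 20 Mbit/s uplink:

```shell
# Back-of-envelope transfer time: gigabytes -> megabits, divided by
# the uplink rate, converted to days (all numbers are examples)
awk 'BEGIN { gb=500; mbps=20; secs = gb*8*1000/mbps; printf "%.1f days\n", secs/86400 }'
# prints: 2.3 days
```

Over two days for one full push, assuming the link is saturated the whole time - which is why offsite schemes often seed the first copy on a shipped drive and then send only incremental changes.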

POWER CONDITIONING - Power is certainly a consideration when a blackout removes it. But when power is available, it could be "dirty", fluctuating in damaging patterns and amounts.

PHYSICAL PLANT ISSUES - How's the electricity situation at your site or data center? Is there a generator or secondary source? What about physical security, like the condition of the doors, gates, windows? How about the architecture and condition? What if the roof comes off? Does the place normally leak during heavy rain? (One might stop by sites during inclement weather for inspections).

LOCAL EQUIPMENT - Think about flashlights, flashlight batteries, first aid and storm kits, tools, etc. that might be good to stash around the workplace.

ALARM FAILURE - If power fails or a falling tree physically damages some asset, what's the backup plan? Email or text message communication generated by software will be problematic if the host machine itself can't operate. And is there a secondary plan for the phone contact tree?

GENERAL COMMUNICATIONS - Even if you have an old-style pulse phone in the place (they keep working in storms because they draw power from the phone line itself), an old-style answering machine still needs AC power to work. And if it's a newer desk phone, it needs AC itself. Does everybody have chargers for their cell phones? Preferably in their cars?

STORED FUELS - As well as diesel or gasoline for the generator, are other fuels needed? Bottled gases or lubricants?

EMP - I don't expect nuclear attack, but other causes of electromagnetic problems are likely, such as sunspot activity or detonation of a transformer on top of a nearby telephone pole. As unlikely as it would be for most assets to need to have Faraday cage protection, some might benefit from that. Most, though, could be protected with less complex or expensive isolation or shielding.


COMING ATTRACTIONS: As soon as I can, I'll get to some projects on which to report, such as recent looks at nmap, CrunchBang Linux and TRK's latest version.

by Pete (noreply@blogger.com) at June 09, 2013 04:14 AM

October 07, 2012

Pete Jamison

Guilty As Charged

I continually discover how many simple things we in tech always tell other people to do... are still undone by us. So yet again, I am spurred to do another 'best practices' post. But this time, the inspiration came from within the ranks of the techies. I will mention no name in connection to any such foolishness except my own. If we have good advice, we should take it ourselves.

BEST PRACTICES, version 34.09?

Change passwords regularly and keep a secure master list offline.

Don't use the same password for everything.

Don't be needlessly repetitious in other areas, like calling a host the same thing as a domain name. That can confuse people as well as systems.

Do backups of all unique data regularly either offline or on an additional disk.

Don't do "secure" work from public sources like schools, libraries, restaurants or coffee shops. If you need to do that, use encryption on an additional computer that's cleaned often.

Run rootkit hunters, antivirus, antimalware, etc. regularly.

Update the operating system and all APPLICATIONS regularly.

If your activities are tied to a particular OS that's dependent on a particular set of hardware, keep important spare parts around.

Don't get rid of your laptop and desktop, attempting to do everything on your phone. It only takes one drop to the concrete, loss/theft or accidental dive into a public toilet to ruin your whole week. Particularly when there were no backups for 6 months.

Be discreet on social media; delete unused accounts.

Keep webmail accounts lean and backed up. Use more secure options when possible - like paid as opposed to free.

Turn off communications capability when not in use. This can include computers, routers, modems, terminal units, repeaters, gaming devices, etc. To leave everything on for two weeks is a prescription for trouble.

Don't click on a link if you don't know where it's going. And never respond to bank emails; call the bank.

Check your own history. Do web searches on yourself, and for old sites, see what archive.org has on you.
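The backup item in the list above can be reduced to a habit-sized script. A minimal sketch, with made-up paths for the source tree and the additional disk:

```shell
# Dated backup of a documents tree to a second disk
# (both default paths are assumptions - substitute your own)
backup_docs() {
    src="${1:-$HOME/documents}"
    dest="${2:-/mnt/backupdisk}"
    # archive the tree under its own name, stamped with today's date
    tar -czf "$dest/backup-$(date +%F).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}
```

Put something like it in cron and "regularly" stops depending on memory.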

by Pete (noreply@blogger.com) at October 07, 2012 05:39 PM

October 01, 2012

Pete Jamison

How to say lots without revealing sensitive details...?

Hmm. Let's see if I can indicate what I'm working on without violating any agreements... Coming back into an actual, real Unix job after so long, I'm having to review Windows Server 2003 and 2008... yes, really. And there's actually a good rationale for it. Not only are many customers flying that product (although not as many as CentOS plus RHEL at our shop), but many server problems are conceptually the same. A reboot's a reboot, a password change is a password change, and a brute-force annoyance is the same annoyance in all camps. And another thing's the same: the support surprise. I found out that we were supporting WinServer 2012 when we went into somebody's new build and saw that our build technician had provisioned it. NOBODY TOLD SUPPORT ABOUT IT. We just kinda found out. One or two other products have recently "appeared" on our plate like that.

But there are all the things that aren't a surprise, and that the average tech should keep up with anyway: DNS, shell, awk, log locations for server OS's and control panel products, plus networking (and internally constructed "custom" networking). And one surprise is how useful nmap has become. It's almost a whole command family of its own, and it has shaken the reputation for instability it carried during the last decade. It can do much more than find a port; the nmap documentation book is on my buy list.

by Pete (noreply@blogger.com) at October 01, 2012 01:01 AM

August 28, 2012

Pete Jamison

SOME Customers Say The Darndest Things

So I have a new job at a help desk to be named later. It's good to be back in the swing of things and most customers are actually my teachers in a way, since they know far more about their systems than do I (being tech support on rented systems in a physical sense but not being an admin since I have no idea what they're doing, in an applications/productivity sense). And not only can they be teachers. Every so often there's that provider of the unexpected request... "Have you got the log of the file system check?" I'll give you a minute for that one to sink in. Actually, I'll give you three weeks.

by Pete (noreply@blogger.com) at August 28, 2012 02:47 AM

December 23, 2011

Pete Jamison


I just turned a Windows guy onto Linux Mint and he's having considerable success with it, in spite of being a total novice at Linux. I just came up with a very few things that it might be helpful for a new Linux user to know, regardless of how easy recent distros have made things:

1. Watch the update symbol and do updates every time they're noted in the word balloon notice thingie. Updates are how security holes are controlled in the *nix world, both for the OS and for many applications.

2. Read the manual pages (the man command in Terminal) on the commands su and sudo (super user). Some distributions restrict the root account, preferring that you operate as a regular user most of the time. When needed, elevate yourself using su or sudo, and give the password requested, which will be either your user password or a separate one for 'root'.

3. Use the locate command to do really quick searches in case you lose a file. Remember that it uses a database, which is freshened up (as root) by invoking 'updatedb'.

4. Use apt-get to do fast updates from Terminal (run as root, or prefixed with sudo). The sequence is

apt-get update
apt-get upgrade
(y for yes to continue or to accept additional required updates)

Of course, the software updater app is there for you as well.

5. If an app, game, utility or other program is present in the distribution's software repository, apt-get is the command to use. It's simply

apt-get install (name of package)

6. Grab an outboard hard drive for backups. Remember that you'll need to reformat it in whatever file system your distro uses; the "format" command is easy to find in the GUI.

7. Speaking of GUI issues, you have choices that outfits like Apple and Microsoft don't give you. They only have one desktop environment, whereas Linux offers bushels of them. Gnome and KDE are the most common full-featured ones, while there are others like Enlightenment and FluxBox that cater to older and slower systems (requiring fewer resources to run).
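On point 3 above: locate only knows what was in its database at the last updatedb run. When the database is stale, plain find still works (slower, but always current). A small wrapper as a sketch - the directory and pattern defaults are just examples:

```shell
# Database-free file search, for when updatedb hasn't run lately
# (the default directory and pattern are made-up examples)
find_file() {
    find "${1:-$HOME}" -type f -name "${2:-resume*}" 2>/dev/null
}
```

Usage would be something like `find_file ~/Documents 'budget*'`; locate is still the right tool once the database is fresh.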

Hope this helps!

by Pete (noreply@blogger.com) at December 23, 2011 09:23 PM

December 01, 2011

Pete Jamison


So my buddy gets his computer broken into (not hacked - the attacker wasn't that capable) by someone we'll call The Bad Guy. The Bad Guy got a keystroke logger somewhere and sent it to my buddy as an email attachment, which was opened in Outlook (see alternative app/OS suggestions below). Then the Bad Guy spends the next three or four weeks getting info from the logger, collecting passwords, user names, account names and such. Then the Bad Guy hits my buddy all at once (probably during hours when my buddy was known to be off shift and sleeping). My buddy's websites are trashed, his email accounts are hijacked and deleted, and many subscriptions and memberships were cancelled since the attacker could now pose as the membership-holder.

If we leave out online banking fraud (which apparently has not occurred), this situation would be about as bad as it gets. Let's make a list of everything that the victim now has to do to pick up the pieces of his online presence and certain aspects of his personal life.

0. Obtain a different, non-compromised computer.
1. Set up new personal and business email accounts (probably at a paid provider as opposed to at some free service), then inform important contacts via telephone or web forms on their sites.
2. Set up new Facebook, MySpace, LinkedIn, Classmates.com or whatever pages and rebuild all contacts lists.
3. Find account numbers, financial transaction records or bills that prove my buddy's identity to the website hosting company. Re-establish access to the website and rebuild it from scratch (all files deleted and replaced by garbage).
4. Image the hard drive of the compromised computer for later reference and/or legal action if desired and possible. Wipe the compromised computer and rebuild from backups, from OS/driver CDs/DVDs if such exist, or by paying the computer maker or authorized repair place to do it.
5. Bring back all personally created documents and work from backups if such were done. Bring back all application programs from install CDs if possessed.
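Step 4's imaging can be done with dd from a rescue boot. This is only a sketch - the device and mount paths are assumptions, and you'd confirm the right device name before running anything like it against a real disk:

```shell
# Hypothetical forensic-imaging step: copy the compromised disk to an
# external drive, tolerating read errors, then seal the image with a
# checksum for later reference (paths are assumptions)
image_disk() {
    src="${1:-/dev/sda}"                         # the compromised disk
    dest="${2:-/mnt/external/compromised.img}"   # somewhere roomy
    dd if="$src" of="$dest" bs=4M conv=noerror,sync 2>/dev/null &&
    sha256sum "$dest" > "$dest.sha256"           # proves the copy hasn't changed
}
```

The checksum matters if legal action is ever on the table: it lets you show the image is the same bits you captured on day one.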

My readers know me as a Linux guy and will anticipate certain suggestions of mine, but let me think like a Windows guy for a bit. I'm confident that I can do this, and am motivated to do so since many of my colleagues spend most of their time in that world. Now, if I'm a Windows person, I would still need to change my ways somewhat. Let's see how adaptation is possible. My buddy is like most people in that he's only concerned with the work that he needs a computer to do, not with security issues on the machine itself. That's how the problem starts and here's a proposed new strategy:

1. Use the original desktop or laptop as the general use/everyday/casual computer. Get another computer for important stuff. And consider a third one, properly Frankensteined for gaming (let's be realistic here).
2. On the general use computer (and the other ones if they're Windows boxes) load an antivirus product like Norton, McAfee, free AVG or such. Load an anti-crapware product like Malwarebytes, AdAware or such. Load a third party cleanup program like Window Washer or the free CCleaner.
3. Update and manually run on a minimum weekly basis all of the programs I mentioned above. In many senses these programs are not automatic; they have to be operated by the user. Various automatic settings are sometimes included but when viruses are automatically found, they must then be deleted by user action (usually prompted by instructions that pop up - they're easy - follow them!) Windows Update also prompts you. Do all updates - they're free bugfixes and don't take much time.
4. If any of these antivirus or anti-crapware products find anything, get onto the other (Important Stuff) computer and see if you can still get into your accounts. If so, change the passwords, going down a roster of accounts that you've stashed for just such an emergency. You may not be acting in time unless you only log onto important stuff with the Important Stuff computer.
5. OPTIONAL - If you wish to investigate, keep the compromised computer in its compromised condition to be examined or for the hard drive to be copied (imaged) for future reference.
6. Rebuild the compromised computer from operating system and driver CDs/DVDs that you've archived in advance, either from the computer purchase or some backup scheme like Norton Ghost or a competitor of that product, or one of the Windows open source (free) backup products.
7. Consider making the Important Stuff computer an Apple Macintosh, or loading a box up with an easy Linux system like openSUSE or Linux Mint, which is now available for free, fast download at


You knew that was coming.

I understand that if you require Windows for work files or development, then you might not have the open source option. But just about anything web-based is child's play for a *NIX box. And the two Linux distributions I mentioned above have good, bright update indicators on their desktops (updates are how most security holes are dealt with in Linux) that'll make you pretty safe. Unless you've pissed off the Defense Intelligence Agency or the Russian mafia.

by Pete (noreply@blogger.com) at December 01, 2011 09:19 PM

September 30, 2011

Pete Jamison

Book Review: THE CUCKOO'S EGG by Clifford Stoll

It's the Reagan administration and although your government research job is unrelated to computer desktop and network maintenance, you've had several such duties deposited into your lap, which became several more. Now you're responsible for solving a summation problem in the accounting program that charges other departments for using time on your system (remember the earliest incarnations of 'time sharing'?). The issue is that there's a bit of time that's been used and no one's paid for it. Digging into the relevant sessions, you notice an account that's been used recently, but was set up for someone who departed for another job two years ago. Questioning of that person and others around him or her eliminates the original owner from suspicion. You begin attempting to find out how the current user of that account figured out how to use it - and from where...

Clifford Stoll's classic story of involvement in serious espionage messes begins in this massively plausible way. It was recommended to me by an NT-era MCSE; I proceeded to enjoy the book in spite of myself. But although I'd heard from various (inaccurate) sources that it involved "viruses", I was at least a little surprised to read the following lines early in the story:

echo -n "LOGIN:"
read account_name
(stty -echo; \
 read password; \
 stty echo; \
 echo ""; \
 echo $account_name $password >> /tmp/.pub)

This isn't a random vandalism attempt and not a program that's attempting to replicate over adjacent machines. Somewhat obviously, it's a password grabber that appends the grabbed string to a file and then allows the user to proceed to the real logger-inner. It's a purpose-built program to harvest logins for the sneak thief who'd arrogated control of an old account to himself, herself or itself. It was placed there by whoever broke and entered.
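For clarity, here's my own commented paraphrase of what those lines do - this is not a quote from the book, and the function wrapper and PUBFILE override are mine:

```shell
# Commented paraphrase of the trap: imitate the login prompt, capture
# both fields, stash them in a hidden file for later pickup.
fake_login() {
    printf 'LOGIN:'                  # looks exactly like the real prompt
    read account_name
    stty -echo 2>/dev/null || true   # hide the password as it is typed
    read password
    stty echo 2>/dev/null || true
    echo ''
    # append the harvested pair to a dot-file nobody will notice
    echo "$account_name $password" >> "${PUBFILE:-/tmp/.pub}"
}
```

The victim sees nothing unusual; the real login presumably follows, and the sneak collects /tmp/.pub at leisure.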

Stoll made the commendable decision not to interfere with the sneak, but to observe the sneak's actions over time. Time revealed that although root was attained, backdoors were installed and files were read, not much time was spent actually reading the files. The sneak proceeded over the LAN to other computers and did the same. The sneak wasn't so much interested in the data (though he did download it) as in the network - the connections over which he could travel to other destinations. And they weren't just unclassified science projects like Stoll's astronomy research post; they mostly included sensitive stuff. Stoll began consulting with the three-letter agencies and the plot thickened.

What follows is not only a story of network research in the days before the consumer internet (which is not a terribly large part of the story) but a common litany of woes regarding having to deal not only with spooks that aren't at liberty to say whether you're full of baloney or not, but with layers and layers of middle management that have no power to say 'yes', but plenty to say 'no'. Without giving much away, I'll say that Stoll laboriously traces the sneak out of the building, out of the organization, away from the West coast to New Jersey, over the transatlantic cable and, modem by modem (that's Modulator/DeModulator, junior) into a situation that begins to address diplomatic issues among countries both allied to us and... perhaps not.

There would be many morals here. The first one that occurs to me is not to let responsibility creep get you thrown into prison, let alone rooked out of a fair paycheck. Five stars out of four.

by Pete (noreply@blogger.com) at September 30, 2011 05:00 PM

June 09, 2011

Pete Jamison

Linux Links

Here's some of my favorite links on a favorite subject...

A distribution specifically geared to repairing Windows installs.

Follows new releases and updates of old standards.

Volunteer organization that answers quandaries and categorizes them for others to reference

a distribution recently noted in Linux Journal for effectiveness in backup and mass rollouts

A most valuable authority with a helpful website as well

A distribution full of tools for penetration testing and data recovery, etc. etc.

The only game you'll ever need!

by Pete (noreply@blogger.com) at June 09, 2011 12:09 AM

Here's a quote...

...that may express some of my thoughts on programming and on Linux well:

“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.”

- Charles Darwin

by Pete (noreply@blogger.com) at June 09, 2011 12:07 AM

May 14, 2011

Pete Jamison

Got the call again...

And by that, I mean the call from the Windows User to the Computer Person to fix something undetermined. And she was cute, so one will agree and mentally review the steps involved in hopefully saving the day. It's a 2-yr-old-ish Vista laptop and this time, I'm not just doing updates but expecting the worst in the way of poorly written Russian adverts and so on. I'm picking it up on Thursday but forged ahead by downloading a new build of Trinity Rescue Kit.

TRK 3.4 is a Linux distro with a payload of tools aimed at Windows repair. The star of the show is a command-line version of ClamAV. I'm assuming viruses have set up housekeeping, so I'll give it the full treatment (boot from the TRK CD and go online for the latest signatures, then run the scan against the HDD for three or four hours). But after that's done, the usual list will kick in:

0. Ideally, do backups as soon as one gets to a stable desktop (hope springs eternal)
1. Disk Cleanup and chkdsk (file integrity check)
2. Defrag
3. update and run the anti-junkware (Spyware Blaster, SuperAntiSpyware or whatever)
4. update and run any onboard brand-name Antivirus (or download AVG or something if there's nothing already there)
5. OS updates
6. browser updates
7. Flash update
8. Acrobat reader update
9. any other app updates available

That's the seat-of-the-pants list. Better lists exist; this one's probably all I'll have time for this week. Unless she's really persuasive.

RESULT: The laptop didn't even boot, so she let me have the carcass for parts and the effort (I got an HDD and some memory out of it) and she went with a new netbook. Good to reacquaint myself with TRK, though.

by Pete (noreply@blogger.com) at May 14, 2011 04:03 AM

May 13, 2011

Pete Jamison


The ability to generate software-only instances of an operating system (virtual systems) promises freedom from hardware concerns and an endless amount of no-cost capability, right? Not exactly, but it holds great promise in certain circumstances. Here's my fence-sitting take on the positives and negatives of this new technology.

PROS:

1. Reduced electrical cost (fewer power supplies to feed)
2. Reduced equipment inventory and space
3. Reduced spare parts inventory and space
4. Simplified rebuild administration (ability to spawn new instances quickly)

CONS:

1. Heavier hardware requirements for the remaining computers
2. Ability to run more systems = greater demands on the admin's time
3. Related to the above: possible management denial that virtual machines need attention
4. Related to the above: multiplicity of installs may lead to security holes

Reduced cost and space have been the main attention-getters. The more purely electronic and virtual an organization, the less material maintenance is needed on all fronts (not that one could totally get away from it). On the downside, an operation might need newer computers and networking to handle the throughput and RAM demands. And responsibility creep might chase away admins who see ever more systems assigned to them, while managers may be tempted to overrule their objections since the new machines (which still require configuration, patching, security analysis, monitoring and so on) are "only virtual". And if many instances of disparate OS's are forgotten about after a few rounds of testing, a break-in artist could find abandoned and unpatched opportunities for setting up shop.

I think the obvious conclusion is that whether or not virtualization works is a function of the skill of the administrator(s). To a lesser degree it's a function of whether or not the hardware STILL present can handle the loads. And we should all remember that if more systems are running on fewer devices, then backups (both software and spare turnkey machines) are rendered even more important than they've traditionally been, which is a lot.

by Pete (noreply@blogger.com) at May 13, 2011 08:24 PM

November 09, 2010

Pete Jamison

Hardware Adventures

I just got a two-year-old off-lease Dell Optiplex GX520 (one of that boxy-looking line of desktops) and when it arrived I prepared to blow away the included build of WinXP for something else, since I already run that system for purely educational purposes and didn't need another instance of it. I got a surprise as I ran the system before erasing it. The DVD-ROM drive didn't work. So how to load a new system onto the box - not to mention finding the solution to the malfunction (assuming no flash drive load or network install)?

This is an important question for the open source community, inveterate OS reloaders that we are. The Optiplex line is huge, and it appears to me that there might be a motherboard issue here that could complicate Linux loading - but I'm getting ahead of myself. Here's what happened as I examined XP performance and loaded (full disclosure here) Win7. [Again, for educational purposes! You can't criticize if you're not experienced.]

From an educational source I'd gotten the Win7 upgrade disc and it was to load on top of the XP load present on the 520. Remember, XP either didn't have a driver for the DVD-ROM or there was a plain hardware failure (a Dell driver CD was not provided with the purchase). I proceeded with the Win7 loading by using a USB outboard DVD burner (with proper BIOS setting for boot-from-disc) and then did the updates. The original inboard DVD-ROM still didn't work after a few reboots.

But after a few MORE reboots and the phone ringing in the other room, I returned to see a word balloon in the bottom right corner of the screen announcing a driver update from somewhere (presumably Windows/MS Update). I allowed it and the inboard transport instantly worked.

If a particular piece of software is needed for the box to see and run the ROM drive, this may indicate a need for the open source OS's to be amended accordingly. On the other hand, maybe there exists in the Linux, BSD and/or OSX worlds a generic driver that works just fine. I don't know. What I do know experimentally is that Win7 eventually recognized and updated the issue itself, without my consulting the Dell site for drivers... which I might still need to do for other issues.

I don't recommend staying away from Optiplexes. I'm just putting this your-mileage-may-vary note out there. It's very possible that the problem has already been solved by all concerned parties and there really aren't any reloading problems here. I'm just surprised that Win7 either knew what to do or figured it out. I hereby grudgingly give credit where credit is due.

(Update of 11/8)
So the driver still appears, goes away and then reappears. But the cause is now known. I looked up the Optiplex model on Dell's support site and it turns out that there is no official Windows 7 driver for this model's transport - but that doesn't keep Windows Update from trying. So I'll just continue to use the outboard unit for burning and trust in some programmer in Austin or somewhere to come through for me eventually...

by Pete (noreply@blogger.com) at November 09, 2010 03:35 AM

October 01, 2010

Pete Jamison

"A man's gotta know his limitations." - a famous Clint Eastwood character

Radio ads are now being heard in my area touting something called "Xfinity Home Security" from Comcast cable. The idea is that it would be really great to be able to control a home security system from your smart phone. I'm afraid this indicates that online convenience will soon prove more popular than security, even within the security products market - truly a new irony. If Comcast saw a potential market for such a thing, consider the possibilities...

Assume that the application security issues are solved (the code is secure). That rules out any problems with the remote interface and the installed app at home. Let's also assume that the Internet Service Provider's servers are properly maintained and in secure configuration. And since this is an extension of ADT home security, we'll assume that there are no problems on their end (any and all physical and software interfaces). What does that leave? Well, at home, it leaves the thing that the (secure in this example) stuff is installed TO, namely the home computer. This is a computer not maintained by Comcast or by ADT or by the ISP (although their systems are possibly impeachable in other examples). This computer's Administrator Of Record is Joe Sixpack, probably with a few wife/kid computers on a $79 router. The last such computer that I looked at for somebody was as secure as a screen door on a submarine.

[Observe: To get into secure app XYZ on user's computer, obtain access to the base Operating System. Since this machine is running a series of security devices, assume 24/7 online stance. This gives you lots of time to run a... "password recovery program". Or to look for cached login info from OS to app. Bingo. You get into the secure app from the OS that the home user failed to configure correctly or patch regularly. This resembles the MANY examples of an outfit getting invaded through a telecommuter workstation left up and online.]

{Another aside: If you replaced the user machine in the above scheme with an appliance, that would eliminate my main objection. Not that appliances haven't had updating and hardening problems, though...}

The larger point? We from the Unix/open source side would not immediately have these problems, since the linking apps aren't generally available for Linux, Solaris or such. But the real problem with remote access to home security is that the systems themselves aren't the biggest point of failure. It's the judgment of the user. It's what the user has authorized to run locally. It's whether or not one should have remote access to things like history logs that record when a door was opened/closed over the last 60 days. Or streaming video in the garage or kitchen. Linux or OpenWhatever can solve lots of problems when replacing less secure systems, but it can't make the user smarter. This is our Limitation as open source advocates: open source can't fix everything. Not only must we evangelize about some system choice where possible; we must warn against giving out one's email address too much, or clicking on a strange web link, or opening that attachment. Those are problems in any system configuration, use or policy stance.

by Pete (noreply@blogger.com) at October 01, 2010 04:31 PM

September 27, 2010

Pete Jamison

User Experience Report: OpenSUSE LINUX 11.3

Briefly, here's how I've spent some of the last two days: on a 2004-vintage 2.7 GHz Celeron (HP) with 3/4 GB of RAM, I've kicked the tires on openSUSE.org's latest offering, and there's good news and bad.

On the bad side, around a half dozen reboots were required for all the hardware detection to appear to have completed satisfactorily. Also, in spite of the networking section getting out to the internet after a boot or two, the app that looks for updates still doesn't seem to have found any. Out of impatience, I'll try the many command-line options soon.

On the good side, the interface is slick and many included apps perform acceptably well. Audio CD played back with no reconfiguration, startup chimes and confirm tones were present and consistent, reboot time sped up after a few instances (it might be saving last-known-good-state info in many places), and OpenOffice was pretty quick and stable even during first use setup.

Too early to tell, really, but at the moment it looks like I'd rate 11.3 as similar to other recent offerings from the Novell/SUSE camp: a worthy contender but needs polish. I've not had quite this many bumps even from recent Fedoras, let alone Linux Mint or other Ubuntu variants. Another thing: I'm covering the simple desktop stuff for a reason. If one wants to impress the Windows people who are coming over out of curiosity, the simple stuff has to work. I realize that I could use rpm or yum command lines and that it might be better to do so. But if an Updater Applet is right there in the tray waiting to be used, it should work. And in 11.2, it DID.
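For what it's worth, the command-line route on openSUSE goes through zypper rather than yum (rpm being the lower-level tool in both camps). A minimal sketch, to be run as root, of what the tray applet should be doing behind the scenes:

```shell
# Refresh repository metadata, then apply any available updates.
zypper refresh
zypper update

# Or, to just see what the updater applet should have found,
# without installing anything:
zypper list-updates
```

If list-updates comes back empty here too, the problem is likely the configured repositories rather than the applet itself.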

Next project will probably be to get the newer laptop thrown together and install BackTrack 4 to the HD and frolic through that famous and extensive tool collection. Oh, and the Celeron 2.7? Back to CentOS!

by Pete (noreply@blogger.com) at September 27, 2010 12:14 AM

September 23, 2010

Pete Jamison

This Is What I'm Talking About

Here's a quote from Chapter 5 of the Eric Raymond book (The Art Of Unix Programming) that I'm reading without actually being a programmer:

"In the following discussion, when we refer to “traditional Unix tools” we are intending the combination of grep(1), sed(1), awk(1), tr(1), and cut(1) for doing text searches and transformations. Perl and other scripting languages tend to have good native support for parsing the line-oriented formats that these tools encourage."

This is from the 'Data File Metaformats' heading in the chapter entitled "Textuality", touting the value of things being done as widely-understood text streams as opposed to the use of relatively more cryptic closed methods.

Simply referring to such a concept of "traditional Unix tools" helps reinforce what newcomers to Unix methodology are seeking to learn. It would also throw into greater relief the point of contact between the Windows command line and Unix, as tools have been ported from one world to the other and can be used in both.
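To make the quote concrete, here's a small sketch of those traditional tools combining over a line-oriented format. The data is hypothetical (a colon-delimited name:shell list, in the spirit of /etc/passwd), but the pipeline pattern is exactly what Raymond is describing:

```shell
# Filter a passwd-style list for bash users, pull out the name
# field, and uppercase it for display.
printf 'alice:/bin/bash\nbob:/bin/zsh\ncarol:/bin/bash\n' |
  grep ':/bin/bash$' |   # keep only the bash lines
  cut -d: -f1 |          # extract field 1 (the name)
  tr 'a-z' 'A-Z'         # transform to uppercase
# prints ALICE then CAROL
```

Each stage reads and writes plain text, which is why any of them can be swapped for sed, awk, or a Perl one-liner without disturbing the rest of the pipe.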

by Pete (noreply@blogger.com) at September 23, 2010 05:42 PM

September 07, 2010

Kojo Idrissa

First Tuesday = Linux 101!

It's the first Tuesday of the month and THAT means Linux 101! Come to HAL-PC and learn from Charles Olsen, a member of the HLUG SIG and one of the hosts of the Mintcast podcast!

You KNOW you want to...

by HLUG Kojo (noreply@blogger.com) at September 07, 2010 06:33 PM

August 10, 2010

Pete Jamison

Some Resource Links For Yez

Dell Vostro V13 laptop (Ubuntu option available):

System76 desktop, laptop and server models w/Ubuntu:

Are you a business and want SUSE with support included (BYO hardware)?

Note: I hold no stock or other financial interest in any of these concerns BUT I WISH I DID.

by Pete (noreply@blogger.com) at August 10, 2010 05:23 PM

August 09, 2010

Pete Jamison

Kinda Linux But More Of An Open Source Posting (as usual lately)

...But linux IS involved! My past as a users' group president in Linux Land is not at all being denied here; I just digress lots. See what you think:

Our pals at LINUX JOURNAL put out this word about the OpenSolaris issue not long ago. It's not an alarmist article; it raises legit questions about whether Oracle values Unix dreadnought Sun's legacy or not. Well, aside from the decision whether or not to continue making certain code chunks widely available through OpenSolaris, Oracle seems to be trumpeting the virtues of the main event clearly enough.

Now there's this. The Illumos initiative may save the day.

My take on all this is that within the OpenSolaris orbit, projects like Nexenta were valuable to Solaris for general Unix PR, as well as for its own products. Nexenta in particular could get Linux people interested in toying with things like ZFS if only due to the Debian tools included, not the least of which is apt-get (and it works!! at least on SOME of my loads). This appeals to my if-it's-Unix-it's-cool ethic. The problem with continuation of the OpenSolaris project would be that Oracle might not be so forthcoming with the big section of proprietary code that's in OpenSolaris and that Oracle now owns. Illumos would solve that quandary if successful. I usually worry about distro forks, but this might be a good one - if one can even call it a fork.

ON A RARE PERSONAL NOTE: Some of you know that I work for an outfit that has a "boonies" location and a "Central" location and that I've been in the boonies for awhile. I AM NOW at Central! Thank you for your discretion.

by Pete (noreply@blogger.com) at August 09, 2010 10:48 AM

July 21, 2010

Kojo Idrissa

It's Wednesday!

That means Linux Workshop at HAL-PC from 6-9pm. Bring your computer and learn about Linux!

Also, don't forget about Technology Bytes!

Technology Bytes:
8-10p CST 90.1FM on your radio. Or take your web browser or stream catcher to www.kpft.org or http://www.geekradio.com/

by HLUG Kojo (noreply@blogger.com) at July 21, 2010 07:28 PM

July 15, 2010

Kojo Idrissa

O'Reilly Ebook Deal of the Day: "Python for Unix and Linux System Admin"

If you're reading this, you have SOME interest in Unix and/or Linux. At least, I hope so. So, how could you pass up this title for only $9.99?!? I couldn't. It helps that I also got "Learning Python" for $9.99 not too long ago.

Use discount code: DDPUX

Today only! Sorry I didn't get this up earlier.

by HLUG Kojo (noreply@blogger.com) at July 15, 2010 12:16 AM

July 06, 2010

Kojo Idrissa

Linux 101: TONIGHT!

It's the first Tuesday of the month and THAT means Linux 101! Come to HAL-PC and learn from Charles Olsen, a member of the HLUG SIG and one of the hosts of the Mintcast podcast!

It's the perfect place to start learning about Linux!

by HLUG Kojo (noreply@blogger.com) at July 06, 2010 07:57 PM

July 01, 2010

Kojo Idrissa

Ebook Deal of the Day – Only $9.99 Learning Python, 4th Ed.

Not a commercial, but I thought this book (and the last one I mentioned, Linux in a Nutshell) would be of interest to people who check out this site or the Houston Linux site.

Here's the book's page:

Python is a GREAT language, used in lots of places, including Google and NASA in Houston (Johnson Space Center).
Use discount code: DDPYT

TODAY (7/1/10) ONLY!!

by HLUG Kojo (noreply@blogger.com) at July 01, 2010 07:25 PM

June 29, 2010

Kojo Idrissa

O'Reilly e-book deal of the day: Linux in a Nutshell: $9.99

Go here:

Use discount code: DDLN6

Today only!

by HLUG Kojo (noreply@blogger.com) at June 29, 2010 07:36 PM