Still seeking a robust Bento replacement.

It’s been five years since Bento’s last update (Mar 16, 2011) and about two and a half years since it was discontinued (Sept 30, 2013).  And yet, two things still surprise me.  (1) There still isn’t a feature-complete replacement product, and (2) I’m still using Bento.  It still works.

The short list of leading Bento replacements: 1Password, HanDBase, or TapForms.

* as of 2016-01-25, FileMaker would still be a >$350 buy-in, require dev work, incur heavy “ease of use” penalties, and still leave me exposed to the same long-term Apple/FileMaker risk.

2016-01-25: tried/purchased 1Password and was left frustrated by missing features.

The 2nd-worst problem is: any “schema” changes are only “per record”… i.e., adding a field only adds it to the record currently being edited… it doesn’t change the underlying table structure… because… they don’t have an underlying table structure… they don’t have a record/table/db schema… each record is just a bag of bits.

The #1 worst problem is: after customizing the fields for a record and trying to export it, the result is nearly gibberish.  It would be very labor-intensive to create my preferred “schema” in 1Password and then export/migrate to even a basic spreadsheet.

2016-01-25: evaluated TapForms and elected not to purchase those apps.

The Mac app is $34.99, 13.4MB.

The iPad app is $8.99, 29.9MB.

The iPhone app is $8.99, 32MB.

From the support forum, the developer has been responding to “wifi-sync” requests with “it’s four or five months away”… but he’s been saying that for a year.  Until he gets that option figured out, TF is a non-starter.

2015-03-01, really need to find a replacement before the Bento apps quit working altogether.

Must-have features: wifi sync, iPad forms.

1Password is an iOS universal app; HanDBase and TapForms are not.

The HanDBase folks are leaving basic features, like form design, out of the Mac app.

** 2015-10-10, HDB pulled out of the Mac App Store over some petty dispute earlier this year.

1Password has too many integration points with too many things; it’s a high-risk product in the long term.

** 2015-10-10, 1P began requiring iOS 9 less than a week after Apple released the new OS. No backwards compatibility at all!

TapForms may become the de facto choice at some point… but I’ll wait a bit longer (Bento is still working today).

** 2015-10-10, TF is only syncing thru iCloud or Dropbox (not an option for secure content).

2015-10-10, still looks like a DIY custom app is my best option…

Was “IP enabling” the OS Kernel, System Libraries, and Application Frameworks a mistake?

For the impatient reader… I’ll cut to the point.  Yes, it was a mistake.  We are well beyond the point where a seemingly good idea has been taken to excess and evolved into a bad implementation.

How did I come to this conclusion?

I spend a lot of time evaluating applications (mobile, server, network elements, desktop, mainframe, and “other”).  I frequently encounter problems where a component was included in such a way that the software simply won’t work without a framework or library that should have been considered optional.

Frequently this occurs during a vulnerability assessment where an application is installed on a server which is denied GUI capabilities [a lot of developers hard-code a “#include” for GUI libraries even when providing command line capabilities, even for software targeting Unix servers].
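
As an illustration of the kind of check I run in these situations (the path and library names below are only examples, not from any specific vendor), listing a binary’s shared library dependencies quickly shows whether a “command line” tool was built against GUI libraries:

    # Hypothetical example: inspect a vendor's CLI tool on a Linux server
    # and look for X11/GTK/Qt linkage that shouldn't be there.
    ldd /opt/vendor/bin/sometool | grep -iE 'libX11|gtk|qt'

On a hardened server with no GUI packages installed, a tool linked this way typically fails at load time with an “error while loading shared libraries” message.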

I’ve also been encountering a lot of mobile apps which only provide single-user, offline features and have no use for or need of network communications capabilities.  Unfortunately I have to do a lot of extra work assessing these applications because OS Kernels, System Libraries, and Application Frameworks have all been “IP Enabled” by their vendors.

Consider this Apple iOS situation… If an application does not include the CoreLocation framework, I can reasonably assume it’s not likely to use my GPS or location information, and I can spend less time looking at those issues.  However, even a Notepad or Solitaire app has unfettered usage of NSURL.  NSURL is a primary mechanism for reading and writing files, both local and remote.  So I spend a lot of time looking for remote communications activity in apps which shouldn’t even include the capability.

Some may believe the CFNetwork framework is required for network communications in an iOS app.  It’s not.  CFNetwork is only required when the developer wants to interact directly with the protocols at a lower level.  APIs like NSURL are fully capable of interacting with a wide variety of media (file) types from almost anywhere, and they are not limited to the HTTP protocol.
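
For what it’s worth, a quick first pass when assessing an app (assuming you have the decrypted app binary on a Mac with the Xcode command line tools; the path below is hypothetical) is simply listing what the binary links against:

    # List linked frameworks/libraries and look for the interesting ones
    otool -L Payload/SomeApp.app/SomeApp | grep -iE 'CoreLocation|CFNetwork'

The catch is that NSURL lives in Foundation, which every app links, so a clean result from a check like this tells you nothing about whether the app can reach out over the network.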

As we slowly move towards IPv6, and situations where devices will have a multitude of IPv6 addresses, the ability to distinguish desired communications activity from undesirable will get even more complicated.

Apple’s iOS NSURL API isn’t the only example of this, it’s just one which a lot of folks are likely to recognize today.  In reality, most modern operating systems are bursting at the seams with these sorts of IP Enabled “features”.

So how did we get to a point where the OS Kernel, System Libraries, and Application Frameworks are so “IP Enabled” that the same API is used whether reading a local text file or a remote file (of any kind whatsoever)?

Explaining this situation may need a little history review…

Once upon a time, computer systems (and networks) did not speak IP.  There were numerous other communications protocols and each had its strengths, weaknesses, and appropriate applications.

During the 90’s, in the early days of what most people now recognize as the Internet, vendors of operating systems, programming languages, and application development tools embarked on an industry wide effort to adopt IP as the primary communication protocol for their products.  At the time, this seemed like a great idea… nearly universal interoperability.

* Although it was an industry wide effort, it wasn’t particularly coordinated or thoughtful on an industry scale.  Some folks tried to provide some thoughtful leadership, but mostly it was a Cannonball Run of vendors scrambling for an anticipated gold rush [which led to the industry’s financial implosion in the early 2000s].

Looking back, the effort occurred as a two phase process.  During phase one (of the great IP adoption), the product’s core functions continued to use previously existing internal protocols and an “IP Stack” was added to the product.  From a customer perspective, this usually satisfied initial expectations and requirements.

However, vendors felt competitive pressure to optimize their products.  Remember, the early-to-mid 90’s were the time of CPU speeds measured in megahertz, 1MB of RAM was a high-end system, and storage media was often measured in kilobytes up to a few MB.  Network communications often utilized modems with speeds measured in bits per second.

Internally, CPUs and software applications need to be able to pass information around.  Within a single application (or process), this is often done with memory pointers or some equivalent.  However, between processes or separate applications there needs to be some communications protocol.

When systems or applications used one protocol for internal communications and then translated data to IP for external communications, many felt this translation process was too slow and consumed too many resources.  Another customer frustration rose from the initial practice of vendors shipping their product with only its native internal protocols and requiring customers to obtain an “IP Stack” from a 3rd party.  In the early days of Windows 3.x and even the initial version of Windows 95, it was common for the installed operating system to only contain a couple Microsoft LAN protocols.  IPX/SPX (Novell), SDLC/HDLC (IBM), AppleTalk, and TCP/IP all required installation of 3rd party software which Microsoft provided little or no support for.

In the mid to late 90’s there were many products available which provided multi-protocol translation services to both desktop operating systems and servers.  It was common to find “Multi-Protocol Router” products, usually software gateways, available for establishing (and controlling) communications between an organization’s WinTel, Apple, mainframe, and other environments.  These multi-protocol router applications could also serve as gateways and stateful application firewalls between internal environments and/or external EDI networks or the Internet.  Many similar products were available to the desktop for print gateways, internet proxies, access to EDI networks, remote dial / desktop control, and other services.

Amazon, Netscape, and Yahoo all came on the scene in 1994.  A lot of early investments were being made and many technical, economic, and social changes were coming together to increase demand for Internet technologies, products, and services.  And that demand was growing in both consumer and corporate markets.

So… it’s the mid to late 90’s.  All indicators are starting to scream that this Internet thing is going to be big.  A lot of good multi-protocol technology already existed for getting people and systems connected to the Internet.  But system performance and customer satisfaction were still poor.  Vendors were shipping multi-processing systems, some multi-processor systems, and multi-threaded applications, and customers were loading up more applications than anyone expected.  Web sites were emerging and growing faster than bandwidth and modem capabilities.  Vendors were scrambling to get in on the gold rush.  And customers often perceived the multi-protocol stacks as performance bottlenecks and/or a source of many system errors.

In reality the problems often had more to do with poor thread management, synchronous queuing, and applications which generated excessive chatter or errors without actually crashing themselves (a misbehaving background task can easily convince a non-technical user that his foreground application isn’t working correctly).

In reality, many of the technologies available in the late 90’s simply were not ready for mass market.  Many products were well suited to their intended task and performed quite well for organizations with appropriate support and reasonable expectations.  Unfortunately… reality, appropriateness, and reasonable expectations are seldom priorities when it comes to mass marketing to consumers.  Many technologies were sold to the consumer (and small business) markets before the products were sufficiently robust or stable.  Revenues flowed; advertising dollars and customer perceptions overwhelmed technological realities, and in fact perceptions often became the effective reality.

As a result of these and other factors, the industry entered phase two (of the great IP adoption) as vendors began a rush to “IP Enable” their operating systems, system libraries, and application frameworks.  Across the industry a lot of products were being redesigned and code was being refactored.  Engineering priorities often included improving (or implementing for the first time):

  • multitasking – i.e., running (or appearing to run) multiple applications at the same time
  • multithreading – splitting an application’s work across multiple threads.  Typically some work is sent to a background thread while trying to ensure the user interface or other input queues are kept responsive to new requests.
  • remote processing – enabling multiple applications to make service requests or share data.  RPC, CORBA, OLE, DDE, and Java RMI are a few examples of remote processing technologies.  Remote processing does not require, and is not restricted to, applications running on multiple physical servers in multiple locations.  It can, and most often does, occur between applications running on a single host computer within a single operating system instance.  A very common example of “remote processing” happening on a local computer would be using the MS Outlook application and selecting the “View Messages in Word” option.  The Outlook app invokes the Word app, sends it data, and sends it instructions on what to do.
  • asynchronous processing and communications – in an asynchronous process, components can work independently of each other.  For software applications, this usually involves some optimization of logic for queuing up multithreaded workloads and handling results.  For communications I/O (whether disk, memory, network, etc.) this usually starts with increasing available channels so transmit and receive operations can occur simultaneously without collisions; next would be optimizing the distribution of activity across available channels.

Advancements in hardware technologies provided software engineers many reasons to rewrite and update their products.  Marketing’s demands for “IP Enabled” fit in nicely with these other priorities.  The engineers were also enamored with Internet communications and liked the idea of supporting fewer communications protocols.

  • At the time I met a lot of software engineers who had little or no idea of the size and complexity of the “TCP/IP Suite” which already existed in the mid 90s.  Even fewer could foresee the explosion of “protocol enhancements” which would follow.  Of the software engineers who actually create protocol implementations, many happily left IPX, SNA, NetBIOS, AppleTalk, and others in the dustbin of history… but I doubt you’ll find many who’d say life has gotten simpler since then.
  • IANA maintains a port numbering scheme for TCP and UDP protocols (mostly those which have been recognized thru the IETF RFC process).  At this time there are about 1,200 TCP/UDP protocols identified by the IANA registry.  Even within an “IP only” environment, this number is just a subset of the protocols available in the seven-layer OSI stack.

What started for many product engineers (software and hardware) as an effort to make products compatible with IP soon became a very public optimization contest for vendors and their marketing organizations.

The race resulted in making IP the default protocol for inter-process communications and even intra-process communications… the birth of the “IP Enabled Operating System Kernel”.

The IP Enabled Kernel actually has two key characteristics.  Some instances may only have one of these, but many now have both.

The first characteristic could be described as “optimization by inclusion”… or you might call it “kitchen sink compiling”.  Many of the networking functions which were previously performed by software modules external to the kernel were compiled into the kernel’s source code.  By doing this, the kernel and networking feature share the same physical memory space.  When the network function lived in a separate process, the kernel would need to physically copy data out to a new memory location which the network function could access.  When the two are combined, they can pass pointers to physical memory.  The result is a dramatic speed increase and reduction in I/O.

Imagine the user wants to send a local file from disk to a network location, but is using a computer system where everything is strictly separated into different application processes.  The System Kernel is in process #0.  The user is currently running application process #1.  The user request causes a file manager to be invoked in process #2.  And a network stack needs to be invoked in process #3.

In a well designed / optimized system, the file could be read directly from disk to the buffers of the network interface.  Proper process boundaries and virtual memory address management provided by the Kernel would prevent the User App, File Manager App, and Network App from knowing anything about each other or the Kernel… and the user’s request would be performed with a minimum of system resources.

Unfortunately, most systems today still aren’t that well designed.  A more common result was for the I/O to occur multiple times as the data traversed the various processes.  Or in even worse circumstances, the physical memory pointers were passed to all of the processes interested in this information and a bug in one would bring everything down in a crash.

The “optimization by inclusion” approach has resulted in many of these functions being compiled into the OS Kernel [or into DLLs which are loaded into the kernel as the system boots… with pretty much the same run-time result].

The second characteristic of the IP Enabled Kernel could be described as “process optimization”.  This approach does several things:

  • organizes the application (process) logic as closely to the OSI layers as practical
  • arranges I/O and data chunks into sizes and patterns which are optimized for encapsulation within IP packets.  Network IP interfaces have a setting called MTU (Maximum Transmission Unit).  If a Kernel process is handling some data which might eventually be sent to a network interface, passing that data around in chunks which fit perfectly into the MTU would be a potential optimization.
  • prefers and implements IP protocols for inter- (and sometimes even intra-) process communications.  This is one of the uses for the Loopback Address of 127.0.0.1.  (Two quick checks follow this list.)
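
Two quick ways to see both of these in practice on a typical Linux box (interface names and tools vary by platform):

    # The interface MTU that payload sizes get tuned around
    ip link show eth0 | grep mtu

    # Services and IPC endpoints bound to the loopback address
    netstat -an | grep 127.0.0.1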

Over time, IP capabilities were made native to more and more OS components, system libraries, and app frameworks.

Today we’ve reached a point where vendors are trying to IP Enable our kitchen appliances.

* Actually some of the vendors tried this in the 90s, but the utilities ignored them and many of the rest of us laughed at them.  Today the vendors are trying it again and people are starting to buy into it.  In some cases utilities have deployed smart grid products which unintentionally introduced IP capabilities on what were thought to be private, non-IP networks (both wired and wireless).  NERC has begun intervening and requiring stricter technology standards and security procedures for the utility industry.

I believe we’ve already gone too far.

I’m not a security by obscurity fan who wants some mysterious black box kernel setting at the heart of my technology products.

Nor am I some sort of closet luddite who wants to shut down the internet.  I like shopping online, using electronic bill pay with automatic bookkeeping and no stamp licking, and digital media.

But I do think it’s time we seriously consider going back to core components which don’t have native Internet capability.  Technology has reached the point where the potential workload from using multi-protocol gateway applications no longer presents a performance problem.

Firewalls and anti-malware tools have become de facto system requirements for everything.  I.e., we’re already running the workload attempting to monitor IP-to-IP communications.  If we stopped allowing every little app, gadget, widget, process, and thread access to every feature of the IP Stack known to man, we could actually reduce the Firewall/Anti-Malware workload on our systems and achieve a higher level of confidence that the monitoring is effective.

Memory virtualization and address randomization have evolved to the point where I/O can be optimized while still preventing processes which share data from knowing about each other or interfering with each other.

There’s no reason for an application to have Internet communications capability without expressly asking permission to load and utilize an appropriate framework.  At application run time the user / device owner should have the option of denying the application that capability when desired.

Security issues would improve with systems which:

  • Move network interfaces and protocols out of the OS Kernel.
    • Use a non-kernel process to access network interfaces.
    • Use non-IP protocols for inter- and intra- process communications.  When a process (even a kernel process) needs network services, require it to request permissions and translation services thru a non-root gateway.
  • The entire network communication stack should be moved to a “multi-protocol router and stateful application firewall service” running under a non-root account.
    • One place to enable/disable communication services.
    • One place to monitor communications services.
    • But not an all-or-nothing architecture.  It should be easy to control which protocols are enabled or disabled.  Same with apps.  And same with inter-process/service communication.
  • These aren’t concepts which require a lot of “start-from-scratch” efforts to realize.  The application logic already exists.  We created the DEN and CIM specifications back in the late 90s specifically to provide an industry standard way of managing relationships between people, devices, applications, and services.  
  • In high security environments, this architecture is the required default.  It’s usually achieved thru a combination of OS hardening and 3rd party security products.  The hardening process removes unnecessary packages from the system, restricts communications capabilities to specific services, and forces communications to pass thru the 3rd party security product for evaluation.  (A rough sketch of the hardening step follows this list.)
  • Bluetooth devices are for personal area networks.  They don’t need a publicly routable IP Enabled network stack.
    • Nor do my USB, Firewire, Audio and HDMI interfaces!
  • Start applications in a ‘least privilege’ mode and allow the user / device owner to approve activation of features.  If the app doesn’t work, or doesn’t fail gracefully, in least privilege mode, it shouldn’t pass QA.  [And the operating system shouldn’t let it run without a user override.]
  • The Apple iOS Privacy Settings panel demonstrates a good concept that could be improved.  Important services, features, etc., which have privacy/security concerns should be isolated to specific Libraries and Frameworks.  Operating systems should provide users / device owners a mechanism to enable or disable entire frameworks as they choose.
    • Organizations with high security requirements have been playing whack-a-mole with Mobile Device Vendors over features like cameras, microphones, location tracking, and more.  While some organizations have had small successes getting policy management points built into Mobile Device Manager (MDM) products and Mobile Operating Systems… consumers have been left with little to no idea what their devices are doing or capable of doing.
    • New features should be linked to a framework and privacy control mechanism before the feature’s GA release.
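
As a rough sketch of what that hardening step looks like on a RHEL/CentOS style system (the package, service, and port names below are placeholders to illustrate the pattern, not a recommended baseline):

    # Remove packages the server role doesn't need
    yum remove -y cups avahi

    # Stop and disable services that listen on the network
    service bluetooth stop
    chkconfig bluetooth off

    # Allow only established traffic and one approved service, then default-deny inbound
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -P INPUT DROP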

These issues don’t apply just to Smartphones, laptops, and other typical IT products.  These issues are just as important for automobiles, appliances, electronic healthcare products, home automation products, industrial robots, the emerging market of home assistive / personal robotics products, and any other newfangled gadgets coming along with abilities to store, process, or communicate information.

Some time ago, a TED talk described a “moral operating system”.  The speaker was describing the need for a system of morality for people… but I tend to take things literally, and kept returning to the idea of, “how could we improve computer operating systems to facilitate these ideas?”

The obvious first step has been known for years.  Design systems so the default choice is typically the better choice.

Another requirement for this new operating system: it needs to begin with the principle that everything on the disk / storage media belongs to the user or device owner.  It’s my information and I have a right to see it when I want to look at it.  It’s my information and I have a right to monitor which applications or processes have been accessing or modifying it.  And I have a right to restrict which applications or processes can access my information on the disk or storage media.

I’m not daft.  I realize DRM isn’t going away anytime soon.  And I’m not here to argue over which DRM system, if any, is better than the other.  I believe an inherently secure, user-centric operating system can still accommodate a DRM’d service by:

  • giving me the choice to delegate  control of a storage location and control of an application sub process to the DRM service.
  • the delegated storage location could be an external media device I choose to dedicate to the service or, more likely, be an encrypted sparse disk image I choose to allow the service to create (at a file location of my choosing).
  • the delegated application “sub process” would likely be some sort of “certificate management” utility which kept the keys to the delegated storage location.
  • so long as I permit the “sub process” to run and don’t tamper with it, it would be able to verify its code signature and verify its certificates to provide sufficient assurance to the DRM’d content provider that I’m following the terms of our agreement.
  • The DRM service should have absolutely no reach or influence within my computer system beyond its application sandbox, its delegated sub process, and its delegated storage location.
  • If I wish to stop or delete the service, it should be as simple as exiting or deleting the application.  The only negative consequence should be losing the ability to read the contents of the encrypted delegated storage area.  Deleting that storage remains my decision, and so does the option of re-installing the App to restore access to the DRM’d media.

In addition to the Virtual Memory Addressing, Memory Address Randomization, and Memory Encryption architectures which have been implemented for computer RAM… I’d also like to see similar architectural changes for how applications are allowed to interact with the file system.

For example, some features might include:

  • Restrict sandboxed applications to a virtual file system using encryption and address randomization instead of allowing the application access to any part of the real file system.
  • Give the user controls to provide an application with access to “the file system framework” so it can interact with things outside its sandbox.  Include some granular choices such as file, directory, or “other app’s data”… with standard file permission options still available also.
  • Just as it may be reasonable to expect an application to ask permission to use a NetworkFramework to communicate outside of its sandbox, it should also be reasonable for an application to need permissions for a FileSystemFramework before interacting with data/media outside of its sandbox.

Again, to summarize in short form, these few key changes could improve the inherent security of many computer / electronic products:

  • Take the network interfaces out of the OS Kernel.
  • Take the network protocols out of the OS Kernel.
  • Direct all network communications thru a non-root multi-protocol router and stateful packet inspection service.
  • Wrap major product features in a system framework and give the user control over whether that framework is accessible on their device, and by which apps/services if they choose to enable it.
  • Always start things in least privilege mode until the owner approves more access.
  • Always start from a place which acknowledges the user’s ownership of information and preserve the user’s ownership and rights.
    • Only the owner can choose to delegate control [the operating system and 3rd party applications cannot arbitrarily grant themselves control over the user’s information].
    • Only the owner can choose to provide access to information.
    • And, only the owner can choose to disclose information.

In real, day to day terms… these architectural changes would not require large shifts in the way most developers and engineers go about building their products.  Very few software engineers actually write protocol stacks, kernels, or system frameworks.  For everyone else writing software, the difference between including a framework in your application vs “getting it for free” from the operating system can be as simple as a checkbox or a “#include”.

The biggest effort, and most important work, is for the kernel and framework developers to adopt architectures which default to inherently safe security configurations and give users control over whether frameworks/features are enabled.

The Linux and Unix communities already have secure OS implementations which achieve some of these goals.  Apple, Oracle, Redhat, and Novell all share some responsibilities for completing the architecture and making it standard in their products.  Microsoft probably has the most baggage to overcome.

Many others in the IT Industry also share responsibilities in making these sort of changes.  Nokia, Siemens, Samsung, Blackberry, Google/Motorola, HP, IBM, and Cisco all need to step up.

Some, such as Symantec, stand to lose some market share if OS vendors finally step up and fulfill their responsibilities.

Intel, AMD, Motorola, Qualcomm, and TI all have a stake in this as well. Intel is easily in the leadership position right now, since their acquisition of McAfee was explained as being done for the express purpose of introducing more security capabilities directly into the CPU and reducing the need for complex 3rd party products to be loaded by the customer after the system purchase.

Listing the CPU manufacturers brings me to my final point for the security architecture recommendations.  To some extent, this one is mostly on the CPU makers, but coordinating with the OS makers will help.

Enough with the kitchen sink “system on chip” approach.  Yeah, it’s a great idea.  But overdoing it is like combining a Super-Walmart, a Cabelas, a college dorm complex, a hospital, and a super-max prison all into a Seven Eleven.  Who decided the most trusted processes and the least trusted processes should run on the same chip?  The $99 smartphone of today provides as much or more processing capability as $200,000 systems available at the time many of these architectural decisions were made.  Inertia has kept us on course.  It’s time to reconsider old design decisions.

After decades of watching more capabilities be combined into a single chip, I’m no longer convinced it’s the great idea it started out to be.  Keep making things more energy efficient, smaller, lighter, and faster.  But consider backing off the physical co-mingling of chip capabilities.  Consider fencing some of these untrusted communication services off to a component chip and working with the Secure OS makers to build good gateway processes and frameworks for controlling the flow of data.

As for multi-core CPUs… in consumer devices, I still haven’t seen many examples of workloads (applications) which can properly utilize four or more cores… and I’ve seen even fewer examples of consumer workloads which actually need to do so for more than a fraction of a second.  Very few consumers run video rendering processes, and even fewer run multiple virtual machines in a continuous build development process.

On the other hand, I believe a currently underdeveloped chip feature which could provide immediate benefits to consumer and business markets combines secure I/O and secure storage.  In our laptops and smartphones, and other similar devices, our media libraries (video, audio, photos) have grown large, but much of our critical personal information fits within just a few megabytes, up to a couple GB for those who have been paperless longer.  CPU and OS makers should look at implementing physical and logical pathways dedicated to providing the user with a secure data vault.  It could utilize any number of different implementation strategies.  Some Flash on the motherboard, a CPU pathway available to a specific USB/MicroSD slot or to an optional region/address within an SSD, or something we haven’t even thought of yet.  Whatever it ends up being, just make sure it provides the user with a means of vaulting relatively small amounts of critical data away from their primary storage disk, in a way that keeps the vault physically and logically isolated even from apps granted generic/general file system access.  The best explanation of my interest here may be Keychain on steroids… put the data at a location physically separate from the regular storage disk, use a different file system, a different encryption protocol and key, and implement a coordinated CPU and OS architecture which requires all access to the vault be shunted thru specific/dedicated frameworks and gateway services (i.e., not directly accessible to regular OS and App processes).
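
As an analogy only (this is today’s Keychain tooling, not the hardware vault I’m describing), OS X already lets you carve out a separate credential store, with its own password and encryption, from the command line:

    # Create a stand-alone keychain file with its own passphrase
    security create-keychain -p 'vault-passphrase' vault.keychain

    # Store an item in it, separate from the login keychain
    security add-generic-password -a demo-account -s demo-service -w 's3cret' vault.keychain

The hardware vault idea takes that same separation and pushes it down into dedicated physical pathways and storage.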

Personally, I have three applications which I use in a “data vault” fashion and would benefit from this architecture.  I only run them occasionally, when I need them.  They already have extra security controls invoked when launching the apps.  Placing the app data into a secure vault protected by physical and logical separations, including a security framework and control gateway, would both improve and simplify the scenario I currently use.  There is a market for this kind of functionality.  If there wasn’t, products like 1Password and RSA SecurID would not exist.

In product areas outside traditional tech markets, vendors are already running into challenges to the System-On-a-Chip (SOC) trend.  NERC recently began requiring smart-grid device makers producing residential smart meters to separate functionality into at least three physically separate portions within the device.  One trusted portion of the device is required to undergo extensive certification testing (every version of hardware and every version of software, even minor updates).  This portion would be allowed to communicate with utility grid control systems.  A second, less trusted, and optional, portion could be implemented for local maintenance access (local/downstream only, no grid/upstream access).  A third, and mostly untrusted, portion is provided for consumer facing services, which have the high risk issues of being available to the consumer’s home network and also getting frequent software updates as consumer features are continuously developed.  Overusing SOC architectures would compromise the entire smart grid.  This mandatory chip/feature segregation is critical to the utility industry and provides benefits none of the other proposed architectures can match.

Automotive, aviation, and many other industry segments have similar requirements for architectural physical separation of features.  We need to recognize the value of dis-integration in consumer products as well.

It’s almost ironic that much of the IT industry has been loudly espousing the benefits of loose coupling and dis-integration for a decade or more.  Yet most of the industry overlooked the increasingly tight coupling between Operating Systems and Network Stacks.

Adopting a new operating system always involves a learning curve.  But I look forward to learning a new modern and inherently secure OS that doesn’t have built in IP support.

And if anyone expects to sell me a self driving vehicle or a personal robot (next year or 20 years from now), consider this early notice of my #1 priority.  An inherently secure design.

In fact, for cars and robots… the world would probably be better off if the communications capabilities were removed to physically separate chips and I/O pathways between the CPU and CommChips controlled by physical switches or keys.  Turning off the switch or removing a key should permit the device to otherwise operate normally… just prevent it from getting new instructions from the neighbor kid while we’re sleeping or gone fishing.

Xcode unable to create snapshot: unable to write to file … plist

Xcode error:  Unable to create snapshot.  Unable to write to info file ‘<DVTFilePath: ~~~~~ info.plist>’.

Xcode has been tossing this error at me for over a year now.  This problem has persisted across multiple versions of Xcode and OS X.  Whenever I open an older project (from a previous version of Xcode), it often suggests some updates to the project and offers to create a snapshot of the project before continuing.  It always fails with an error similar to this.  Manually initiating an Xcode snapshot from menu FILE | Create Snapshot results in the same error.

It hasn’t been a real issue for me; I use Git for version control and have never relied on Xcode snapshots.  It’s only been a minor nuisance, but I’m procrastinating on some other work and this seems like a good time to investigate the problem.

Xcode unable to create snapshot

Quite a few web posts indicate it has something to do with corrupted .DS_Store files and/or conflicts with Git and the location of the .git folder relative to the Xcode project.

To test those ideas, I tried deleting those files… it didn’t work.  I also tried creating a new project without using Git at all.  It still failed with the error.
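
For anyone wanting to repeat the first test, this is roughly what I ran from the project’s root directory (be careful with find’s -delete):

    # Remove Finder metadata files throughout the project tree
    find . -name '.DS_Store' -type f -delete

    # For the Xcode-vs-git theory: check where the .git folder actually lives
    ls -la .git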

Creating a brand new directory, verifying permissions, and changing Xcode preferences to use that directory for snapshots didn’t work either.

For now, I’m stumped on this one.  The next step may be to try running a clean copy of Xcode and OS X in a VM and see if the problem shows up there.  Maybe I’ll find something later… if I do, I’ll post an update.

Xcode 4.4.1 update is 47.48MB from the Mac App Store.

Today’s new Xcode update 4.4.1 (for production usage) comes as a 47.48MB update from the Mac App Store.

After the update completes, checking the Xcode preferences for components and documentation indicated “Command Line Tools (143MB)” was the only portion needing an additional update at this time.  It appears the simulators and documentation did not change.

Install CentOS 6.3 64-bit Linux in VMware Fusion 4.1.3

As part of my iOS app development and testing lab, I have a need to be able to test client applications against multi-platform database services.  Last year I determined a collection of OS X, CentOS, and openSUSE virtual machines running MySQL and PostgreSQL provides an adequately diverse test environment for my needs.

A wide variety of application prototyping and testing needs can be served by these combinations without requiring a rack of high end hardware and a couple full time DBAs to maintain everything.  Many of my clients have performance testing and production requirements far beyond my little “proof of concept” setup.  However, my “proof of concept” environment often helps me better understand how to communicate with the DBAs in the large organizations.  And sometimes it allows testing ideas that they don’t have the luxury of trying out on a $25 Million production database cluster.

My virtual lab environment had grown a bit stale over the past year.  Over the past week or so, I’ve been updating to OS X 10.8 Mountain Lion and both the Xcode 4.4 general release and the Xcode 4.5 iOS 6 betas.  Now I’m beginning to update the Linux and SQL components of the environment.  I’ve had a long affinity for Suse Linux so I like to keep a familiar distro on hand.  Many clients are using Redhat in their production environments, so CentOS has become a necessity.  In the past, Solaris was always a key component of my setups but not so much any more; adding some new Solaris VMs will be deferred for another time.

For this portion of the lab update I’ll be building a couple new CentOS VMs and keeping some notes.  I’ll begin with the CentOS 6.3 x86_64 “netinstall.iso”.  Assuming you’re installing to a location with internet connectivity (and not organizationally firewalled into using sneakernet for your lab), the netinstall.iso option saves the time otherwise spent updating all of the packages in the LiveCD or full ISO images.
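
Before booting the ISO, it’s worth a quick integrity check against the checksums published alongside the images on the CentOS mirrors (the filename below is the 6.3 x86_64 netinstall image; adjust to whatever you downloaded):

    # Compare the output against the value listed in the mirror's checksum file
    shasum CentOS-6.3-x86_64-netinstall.iso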

In VMware Fusion 4.1.3,

  • select the menu options “File” and “New” to get the “Create New Virtual Machine” dialog window.
  • select “Continue without Disk”
  • select “Choose a disk or disk image…”
  • use the presented Finder popup to navigate to your target ISO image (which you’ve previously downloaded) and select “Continue“.
  • select Operating System: “Linux
  • select Version: “CentOS 64-bit
  • select “Continue”.  note: the OS and Version selections are important as they inform VMware Fusion which drivers, VMtools, and VM configuration settings to utilize.  VMs can successfully be created using less specific settings, but you’d lose out on some features of Fusion and likely have to perform additional manual configuration work within your Linux VM.
  • you should be presented with a summary configuration of your new VM with the options to “Customize Settings” or “Finish”.  This default will likely be one processor core and 1GB memory; I recommend increasing this to two cores and 2GB memory.  After completing the installation and configuration, you might try lowering the settings, but these will be helpful for getting thru the various package installations and configurations.
  • select “Finish” and use the Finder popup to name and save your new VM.  I like to configure a “base image” to my preferences and then make copies as needed for testing new configurations or loading additional packages.  So it’s helpful to think of a naming convention if you are likely to have multiple copies over time.
  • Fusion will start the new VM and the netinstall.iso will boot to a setup process.  Netinstall will be a text based interface (use your keyboard arrow keys to move between options).  The first dialog will be for testing the installation media.  I’ll “Skip” the media test.  If you’re uncertain about where your image came from or the quality of your internet connection, you may want to let the media test proceed.
  • choose a language
  • choose a keyboard type
  • choose an installation method.  select “URL“. (you’ll be prompted for details later).
  • configure TCP/IP.  unless you need to change, accept the defaults by selecting “OK“.
  • a dialog will display “waiting for network manager to configure eth0”
  • URL setup.  enter “http://mirror.centos.org/centos/6.3/os/x86_64”.  The text interface does not allow copy/paste from the host, so you will need to type this in exactly.  centos.org redirects the download to one of many mirror sites.  If the URL doesn’t work for you, check your typing and try again.  It’s possible the redirection could get sent to a server that is temporarily busy or offline.  Trying again usually works.  If not, you’ll need to do some searching to locate a direct URL to a mirror server that is reachable from your network location.
  • After the netinstall process begins, in a few moments you’ll see a graphical screen displaying a CentOS 6 logo.  Select “Next“.
  • Basic storage device should be ok. Select “Next“.
  • Storage Device Warning.  This is a fresh install, so select “Yes, discard any data“.
  • local hostname:  Enter a hostname for your VM.
  • select a timezone.
  • enter a root password (twice to confirm, must be at least six characters).
  • which type of installation would you like?  select “use all space“.
  • write changes to disk
  • select optional software to install.  Note:  Selecting software packages is a lot easier if you wait until the system is up and running with VMtools providing proper mouse and video drivers plus the ability to select the various package repositories you’ll want to use.  So, for this step,  select “Minimal Desktop” and “Next“.  If you choose the “Minimal” option, you’ll be limited to the command line.
  • The necessary packages will be downloaded and installed (about 30 minutes on this older Core 2 Duo Mac mini).  When it’s complete, you’ll be prompted to “Reboot”.
  • After the reboot, a Welcome screen will continue the process of setting up the new system.  Select “Forward“.
  • Agree to the license and select “Forward“.
  • Create User: input your desired user information. Select “Forward“.
  • Set Date and Time. Select “Forward“.
  • At this point I get a warning message “Insufficient memory to auto-enable dump. …” That’s ok, I don’t need it for this usage, so I’ll select “Ok” and “Finish“.  The VM will reboot to complete the setup.
  • After the reboot, a GUI login screen will prompt you to log in with the account just created in the previous steps and deliver you to the new desktop.

At this point the new VM is ready to use with a base configuration of the “Minimal Desktop” distribution of CentOS v6.3.  However, there are some additional steps to make it a bit more user friendly prior to archiving a copy and proceeding with the desired dev / test work this VM is intended for.

  • Use the VMware Fusion menu to select “Virtual Machine | Install VMware Tools”.  If you’ve not previously used this feature in your current version / installation of VMware Fusion, you’ll be prompted that “VMware Fusion needs to download the following component: VMware Tools for Linux”.  Select “Download”.
  • VMware Fusion will be adding an additional component to the Fusion application on your Mac OS X host, so you will be prompted to authenticate and permit this action.
  • Next you’ll be prompted by Fusion to “Click Install to connect the VMware Tools installer CD to this virtual machine“.
  • This should result in the CentOS VM’s desktop displaying a DVD (or CD) icon titled “VMware Tools”.  Unfortunately, mine displayed a blank folder with an empty disk as a result.
    • Checking “/Applications/VMware Fusion.app/Contents/Library/isoimages” confirmed that a “linux.iso” file was present (dated 2012-05-27).
    • Rebooting the VM and re-trying the VMtools installation still resulted in an empty disc image / folder.  This is a common problem between Fusion and many Linux distributions.  VMware’s support forums offer several workarounds, most of them at the command line.
  • My solution is to use the OS X Finder to browse the “VMware Fusion.app” package contents, copy the “linux.iso” to another folder, and mount it to the VM’s CD drive.
  • Return to the CentOS desktop, use “Computer” to browse the CD.  You should now see a “VMware-Tools……tar.gz” file.
  • Drag the “….tar.gz” file to your home folder.  Don’t bother right-clicking and selecting “Open with archive mounter”.  Extracting the files through the GUI will probably result in a process that estimates a couple hours to complete.
  • Use the CentOS “Applications” menu to launch “Terminal”.  (The full terminal command sequence is consolidated just after this list.)
  • “cd” to your home folder.
  • Use the “ls” command to verify the “…tar.gz” file is there.
  • Expand the archive using “tar zxpf VMwareTools-….tar.gz”  HINT: type “tar zxpf VMw” and hit “Tab” to autocomplete the command.
    • This should result in a new folder named “vmware-tools-distrib” containing 3,275 items for 178.6MB.
  • In terminal, type “cd vm” and hit “Tab” (to autocomplete).
  • Another “ls” command should verify the presence of “vmware-install.pl”.
  • You’ll need super user (root) privileges to run this script.  Type “su” and then enter the root password established during installation.
  • Enter “./vmware-install.pl” (or just type “./v” followed with a tab key to autocomplete).
  • The script will prompt with about nine questions.  Use “Enter” to accept the defaults for each.
  • When the script completes you can delete the “…tar.gz” from the VM to save disk space.  In all likelihood, if you ever need them again for this specific VM, they’ll be out of date by then.  Reboot the VM to activate the VMware Tools features.
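
Pulling the terminal portion of that list together, the whole VMware Tools install boils down to something like this (the archive filename will vary with your Fusion version):

    cd ~
    tar zxpf VMwareTools-*.tar.gz
    cd vmware-tools-distrib
    su                      # enter the root password set during installation
    ./vmware-install.pl     # accept the defaults at each prompt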

Now that VMware Tools is active the mouse should work much better, and you’ll be able to resize the VM window to whatever size fits best on your host machine’s OS X desktop.  Copy/paste from the host machine should also be enabled.

VMware Fusion shared folders should also be working now.  However, you should verify, as this is another feature where Fusion yields different results across various Linux distributions.  On this particular CentOS VM, sharing some folders from the host machine resulted in them being available within CentOS at the path “/mnt/hgfs/”.  Fortunately it wasn’t necessary to perform any additional commands to use them.  A quick test confirmed the shared path was readable and writeable from the VM.  note: this feature mounts the shared folders into the guest VM as a virtual file system; there isn’t any shared/virtual networking going on with this feature.
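
The quick read/write test amounted to something like this (“YourShare” is whatever share name you configured in Fusion’s Sharing settings):

    ls /mnt/hgfs/
    echo "hello from the VM" > /mnt/hgfs/YourShare/write-test.txt
    cat /mnt/hgfs/YourShare/write-test.txt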

The next step I recommend is selecting the Applications menu “System | Software Update”.  Despite having just completed a network installation, this new instance of the CentOS Minimal Desktop config had 43 available updates (124.6MB).  The update process will prompt for the root password.  You will also likely be prompted to authenticate to accept certificates, signatures, and various packages during the update process (so it’s not a walk-away-and-leave-it process).

Now that the base config is installed and updated, I’ll shut down the VM and make a Zip Archive (using OS X Finder) of its VM image.
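
Finder’s Compress command works fine; the equivalent from Terminal on the OS X host (VM shut down first; the path and VM name below are examples) would be something like:

    cd ~/Documents/Virtual\ Machines.localized
    zip -r centos63-base-$(date +%Y%m%d).zip "CentOS 6.3 base.vmwarevm"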

It was about 2.5 hours to get this far.  A quad core host machine with SSD, and a faster internet connection, would reduce that considerably.  Some of the time was also spent writing these notes.

With this new configuration built and a backup tucked away, I probably won’t need to perform a base install of CentOS in this environment for another year.  I didn’t keep as much detail last time, so I’ll have to wait another year to compare whether things get faster.

My next steps for CentOS will be to configure the various application packages and settings that I need (and make another Zip Archive backup).  From there it is much faster to deploy additional instances for dev/test work whenever needed.

Testing “Xcode 4.5 and iOS 6 SDK beta 3” using a virtual machine instance of OS X Lion

For a couple weeks now, I’ve been using Xcode 4.3.3 and the iOS 5.1 SDK on a mid 2012 MacBook Air 13″ with 8GB RAM.  It’s very nice.

With the July 16th update to the iOS 6 development betas, it was time to test running the new Xcode environment under a VM on the MacBook Air.  The first step in the process was to get a virtual instance of OS X Lion 10.7 running under VMware Fusion.

I’ve done this before, but the new 2012 MacBook CPU (Intel Ivy Bridge) caused a “CPU disabled by guest operating system… ” error under Fusion.  The solution was to add this line to the *.vmx config file of the target VM.

      cpuid.1.eax = "----:----:----:0010:----:----:1010:0111"

VMware should have a 2012 update to Fusion for OS X Mountain Lion 10.8; they are currently testing it as a “technical preview”.  This post provides more information on the error and its solution.

With that problem solved, it was time to get Xcode 4.5 beta 3 up and running, right after installing VMware Tools and configuring some OS X settings to my preferences (I didn’t use Migration Assistant for this VM as I wanted a fresh environment).

The next issue was with the Xcode 4.5 app.  It would not run.  I had used Lion 10.7.3 to create the VM, but the new beta requires a minimum of 10.7.4.  Using Software Update to get the 10.7.4, iTunes, and Safari updates downloaded about 1GB.  After the updates, the Xcode 4.5 beta is now able to run.  This is a good place to make a backup of the VMDK and save it for future use.

VMware snapshots or Fusion Time Machine integration are both good features, but I prefer to locate the *.vmwarevm file (package) in Finder and copy it to a compressed zip file.  I’ll use this zip as a clean start for additional beta releases as well as some OS X Server testing.  I’ll also use it to test the Mountain Lion upgrade.

After installing Xcode, you’ll most likely want the ability to do something with it.  This entails installing some “core libraries”.  From within Xcode Preferences, the Downloads tab provides access to additional Components and Documentation. Plan for another GB or more of downloads.

If you’re setting up your virtual dev/test environment for the first time, plan on 4 or 5 hours and several GB of downloads/updates during the process.  After that you’ll be able to test beta releases or do other experimental work in a VM (with USB access to physical devices if desired) without affecting any of the apps on your host Mac.

Installing OS X Lion into VMware Fusion on a 2012 MacBook gets a “cpu disabled by guest operating system” error.

In order to install OS X Lion into a virtual machine running under VMware Fusion, you need the install file from the Mac App store.  In this case, I started with:

  • the Mac App Store file “Install Mac OS X Lion_10.7.3.app”
  • VMware Fusion 4.1.1
  • Macbook Air 13″ mid-2012

I’ve run OS X Lion under VMs previously, so expected this should work without any difficulties.  I expected wrong.

Attempting to start a new VM resulted in a Fusion error message stating, “The CPU has been disabled by the guest operating system…”

To troubleshoot I:

  1.  I started by checking for Fusion updates; the in-app update check didn’t show any available updates.
  2. Next I decided to try a reinstall of Fusion.  I deleted the app, rebooted, and went to vmware.com to download a fresh copy.  I found a newer version, 4.1.3.  Trying this version resulted in the same CPU error.
  3. I did some additional searching and found a VMware forum thread which referred to a workaround listed in another VMware forum thread.

Here’s a summary of the solution to save you the time of going through all of the forum thread references.

The physical Intel CPU in the mid-2012 MacBooks is new.  As a result, if you are using a Mac App store installation file for OS X obtained prior to the 2012 MacBooks, that version won’t understand the new processor.

The solution is to edit the configuration file of the OS X Lion virtual machine to add this entry

cpuid.1.eax = "----:----:----:0010:----:----:1010:0111"

The configuration file will be located within the actual VM storage file.  You can use Finder to locate the *.vmwarevm file, then right click to “Show Package Contents”.  The config file will be the *.vmx file.

An easier way to open the VM’s configuration file is using the VMware Fusion Virtual Machine Library window.  Use the OPTION key + Right Click on the target VM.  An option to “Open Config File in Editor” will be available.
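
If you’d rather not dig through Finder, the same edit can be made from Terminal with the VM shut down (the path below is just an example; point it at your own .vmx file, and make sure the quotes stay straight quotes rather than smart quotes):

    echo 'cpuid.1.eax = "----:----:----:0010:----:----:1010:0111"' >> \
      ~/Documents/Virtual\ Machines.localized/Lion.vmwarevm/Lion.vmx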

This solution should work on the following combinations of hardware and software:

  • All mid-2012 MacBooks
  • VMware Fusion 4.1.3
  • OS X host operating system version Lion 10.7.4
  • OS X guest operating system versions Lion 10.7.3 or 10.7.4

VMware is working on a “Technology Preview 2012” for OS X Mountain Lion 10.8.  As of this writing the workaround for that version is different.  Hopefully VMware will clean this up prior to releasing the 2012 version.

If Apple releases a Mac App store installation package of Lion 10.7.5, that may also solve the VM CPU configuration issue.

Running OS X Lion in virtual machines is my preferred method of testing new versions of the iOS SDK and Apple’s iOS Device Management tools.  With a recent move to a 2012 MacBook Air 13” and the developer release of iOS v6, apparently it was time to encounter a new collection of configuration issues.

iPads/iPhones in government and military.

Here are some links to additional information about using iPhones/iPads in the government and the military.  They might give some additional ideas about what’s possible with the iPhones and iPads.

Apple ID account management – password resets, purchase history, iCloud, etc

Unfortunately Apple still hasn’t provided a “one stop shop” for managing all aspects of an Apple ID and a customer’s relationship with Apple.

Personally, I find myself periodically needing to review or update account related information in up to five different places.  Here’s a summary of what’s in each area and how to get there quickly:

note: this article assumes you already have these accounts and only provides a quick refresher on how to navigate back to various areas to update or verify things.

1.  The Apple ID:  your Apple ID is the root anchor of your relationship with Apple.  There are numerous paths you can navigate to access this information, but the simplest seems to be visiting this URL from a web browser:  appleid.apple.com

From this location, you can manage your password, the email address for your account, and your contact information.  For most websites (where the relationship is much less involved), I usually stuff these data fields with bogus information.  However, I do purchase things from Apple and they use the information here as part of that purchase process.  So it becomes necessary to enter correct information.

note:  If you’ve grown tired of receiving the Apple emails each week about the latest thing they have for sale, the “Language and Contacts Preferences” is the location to turn those off.  A portion of the URL for these settings is automatically generated during each login session, so I cannot provide a direct link.

2. The iTunes Account:  your iTunes account is intimately linked with your Apple ID, but to manage the additional account information, the best location is within the iTunes desktop application.  (You can also do this from an iOS device; I’ll cover what’s possible there and how to do it in another article.)

Within the iTunes desktop application, navigate to the iTunes Store (from the list of things in the left side navigation bar – usually just below your Library).  Assuming you’re logged in, the upper right corner of the iTunes window should show your Apple ID (email address).

Placing your mouse/cursor at the end of the email address provides a drop down menu.  Select “Account“.

From this area, you can manage:

  • payment information
  • computer authorizations
  • iTunes in the Cloud devices
  • purchase history
  • Ping (if you use that)
  • and some additional Settings

Some of my iTunes transactions are business related, so the purchase history is very helpful for retrieving receipt information for my reporting needs.

Additionally, the “iTunes in the Cloud: View Hidden Purchases” is helpful now that iTunes: Purchased allows you to hide previous items from display.

3. The iCloud Account: Most of iCloud is best managed from an iOS device.  If you log in to the web interface at http://www.icloud.com, the primary management feature is an option to reset the photo stream.  Click your user name in the upper right corner, and select Advanced from the pop up menu.  For now, Reset Photo Stream is the only option presented.  There is also a URL link to the Apple ID account management web page described above.

To manage your iCloud account from an iPad:  start by launching the Settings icon and find the menu option for iCloud.  At the top of the detail view window, select Account (it should already be displaying the email address for your iCloud account).

From here you can manage your iCloud password, your Storage Plan, and (if applicable) your iCloud Payment information.

For most folks, this will be the same payment information as your Apple ID above (you’ll be asked to authenticate with that Apple ID login).  However, some of us who had previous .Mac accounts have ended up with two Apple IDs… which won’t necessarily have the same login or payment information as our primary Apple ID.  Although it seems to work ok most of the time, some days “it’s complicated”.

While you’re in the Settings | iCloud menu window, you can also turn various features on or off.  And you can use the Storage & Backup selection to view numerous options.

If you’ve elected to utilize the iCloud backup feature for your iOS device, this is where you find the option to omit various applications from being backed up.  This is particularly helpful if you’re trying to stay within the free 5GB quota.

4. The Apple Store Account:  This is the account for purchasing physical products.  From a web browser, visit this URL:  http://store.apple.com/us/account/home

The primary things to do here are tracking orders and viewing previous order history.  Unlike Amazon, Apple only provides 18 months of order history.  So if you need to reprint invoices for tax receipts or such, don’t delay too long.

These pages also link to the information for your Apple ID.

5. The MobileME Account:  Although I’ve migrated from MobileME to the iCloud service, I still have access to the remainder of my iDisk service subscription.  I’d like to hope that Apple will provide an iCloud equivalent before they completely turn down the MobileMe iDisk service; but I’m not really expecting them to.

Apple has set June 30, 2012 as the last day for the MobileMe services.  I’ve already moved my web hosting and pictures from the service.  And only use iDisk for limited (and short term) things at this point.

Within the MobileMe web interface, move your mouse over your user name (in the upper right hand corner) and click on Account to view settings, options, and account information.  If you still have data, photos, or web pages in MobileMe, it’s time to start finding a new home for them.

Footnote:  When I started writing this note, I thought it would be a short reference containing links to the Account tools for Apple ID, iTunes, and iCloud.  As I verified everything, the note continued to grow and grow.  I really hope someone from Apple is paying attention to this problem and working on a solution to simplify how we maintain our relationships with their products and services.

It’s slightly ironic that I need more management interfaces for my Apple account than I need remote controls for my home theatre setup.