Situations where JIRA doesn’t meet the needs of a project (JRA-846).

When evaluating tools like JIRA, HP ALM, or IBM Rational, it’s important to evaluate project needs against product capabilities.  Obviously the costs of getting started with JIRA are much lower than some alternatives.  But sometimes, being penny-wise can result in being pound-foolish.

For a simple “MVC” type application with a limited set of components, it’s likely JIRA’s features will be adequate, or that project needs can be met with some minor customizations and/or plugins.

However, when managing ongoing development of systems which contain many levels of hierarchical components, JIRA’s limitations may present significant obstacles.  For many years, there have been open feature requests regarding support for hierarchies.  As of March 4, 2014, JIRA’s response is that it will be another 12 months before they “fit this into their roadmap”.

Jira JRA-846 Support for subcomponents

For large distributed systems, with complex dependencies, this presents a significant challenge.

While setting up a new JIRA/Atlassian environment for a solution comprising 8 major applications, I’ve found that it is not possible to create a hierarchy of subcomponents.  Nor is it possible to establish versioning for those subcomponents.  Instead, the JIRA data model and workflows are designed for all components of a project to exist as a flat list, and for all components to be on the same version / release cycle.

For our solution, many of the major applications start with a commercial product, incorporate multiple modules, integrate an SDK, integrate 3rd Party plugins, and finish with custom coding of multiple subcomponents.  The design pattern is to establish interface boundaries, decouple the components, and enable components to be updated independently (some people call this SOA).

Now I’m getting a clearer picture of when it is time to consider alternatives such as HP ALM or IBM Rational.  In the past, I’ve encountered several very successful JIRA implementations.  And I’ve encountered a number of failures.

Comparing my current experience of setting up a new “systems development” project in JIRA with those past experiences, I now understand the tipping point was a matter of component complexity.  JIRA’s architecture needs to be changed so that components can be containers for other objects and can be versioned independently.  While there are elegant, simple ways to introduce a data model which supports this, it would likely require refactoring most (if not all) of their application stack.  Given their success with smaller projects, it’s easy to understand their business decision to defer these feature requests.

JIRA continues to recommend workarounds, and several 3rd party plugins attempt to address the gap.  Unfortunately, each of these workarounds is dependent upon the product’s internal data model and workflows.  JIRA itself has discontinued development of features which supported one of the suggested workarounds.  And some 3rd party plugins have stopped development, most likely due to difficulties staying in sync with internal JIRA dependencies.

It can take six months to two years to get an HP ALM or IBM Rational solution running smoothly, and there are ongoing costs of operational support and training new developers.  However, there are use cases which justify those higher costs of doing business.

It’s unfortunate my current project will have to make do with creative workarounds.  But it has provided me an opportunity to better understand how these tools compare, and where the boundaries are for considering one versus the other.


Was “IP enabling” the OS Kernel, System Libraries, and Application Frameworks a mistake?

For the impatient reader… I’ll cut to the point.  Yes, it was a mistake.  We are well beyond the point where a seemingly good idea has been taken to excess and evolved into a bad implementation.

How did I come to this conclusion?

I spend a lot of time evaluating applications (mobile, server, network elements, desktop, mainframe, and “other”).  I frequently encounter problems where a component was included in such a way that the software simply won’t work without a framework or library that should have been considered optional.

Frequently this occurs during a vulnerability assessment where an application is installed on a server which is denied GUI capabilities [a lot of developers hard code a “# Include” for GUI libraries even when providing command line capabilities, even for software targeting Unix servers].

I’ve also been encountering a lot of mobile apps which only provide single-user, offline features and have no use for or need of network communications capabilities.  Unfortunately I have to do a lot of extra work assessing these applications because OS Kernels, System Libraries, and Application Frameworks have all been “IP Enabled” by their vendors.

Consider this Apple iOS situation… If an application does not include the CoreLocation Framework, I can reasonably assume it’s not likely to use my GPS or location information and I can spend less time looking at those issues.  However, even a Notepad or Solitaire App has unfettered usage of NSUrl.   NSUrl is a primary mechanism for reading and writing files, both local and remote.  So I spend a lot of time looking for remote communications activity in Apps which shouldn’t even include the capability.

Some may believe the CFNetwork framework is required for network communications in an iOS App.  It’s not.  CFNetwork is only required when the developer wants to interact directly with the protocols at a lower level.  APIs like NSUrl are fully capable of interacting with a wide variety of media (file) types from almost anywhere, and they are not limited to the HTTP protocol.
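
To make that concrete, here’s a rough Swift sketch using the modern equivalents of the NSUrl APIs (the file path and URL are placeholders I made up).  The point is that the same high-level Foundation call reads a local file or a remote resource, so nothing at the call site tells a reviewer which of the two an app will actually do at runtime:

    import Foundation

    // The same high-level API reads a local file or a remote resource.
    // Paths and URLs are placeholders for illustration only.
    let localText = try? String(
        contentsOf: URL(fileURLWithPath: "/tmp/notes.txt"), encoding: .utf8)
    let remoteText = try? String(
        contentsOf: URL(string: "https://example.com/notes.txt")!, encoding: .utf8)
    // From an assessment point of view the two calls look identical,
    // even though only the second one generates network traffic.
    print(localText?.count ?? 0, remoteText?.count ?? 0)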

As we slowly move towards IPv6, and situations where devices will have a multitude of IPv6 addresses, the ability to distinguish desired communications activity from undesirable will get even more complicated.

Apple’s iOS NSUrl API isn’t the only example of this, it’s just one which a lot of folk are likely to recognize today.  In reality, most modern operating systems are bursting at the seams with these sorts of IP Enabled “features”.

So how did we get to a point where the OS Kernel, System Libraries, and Application Frameworks are so “IP Enabled” that the same API is used whether reading a local text file or a remote file (of any kind whatsoever)?

Explaining this situation may need a little history review…

Once upon a time, computer systems (and networks) did not speak IP.  There were numerous other communications protocols and each had its strengths, weaknesses, and appropriate applications.

During the 90’s, in the early days of what most people now recognize as the Internet, vendors of operating systems, programming languages, and application development tools embarked on an industry wide effort to adopt IP as the primary communication protocol for their products.  At the time, this seemed like a great idea… nearly universal interoperability.

* Although it was an industry wide effort, it wasn’t particularly coordinated or thoughtful on an industry scale.  Some folks tried to provide some thoughtful leadership, but mostly it was a Cannonball Run of vendors scrambling for an anticipated gold rush [which led to the industry’s financial implosion in the early 2000s].

Looking back, the effort occurred as a two phase process.  During phase one (of the great IP adoption), the product’s core functions continued to use previously existing internal protocols and an “IP Stack” was added to the product.  From a customer perspective, this usually satisfied initial expectations and requirements.

However, vendors felt competitive pressure to optimize their products.  Remember, the early to mid 90’s were the time of CPUs measured in Megahertz, when 1MB of RAM was a high end system and storage media was often measured in Kilobytes up to a few MB.  Network communications often utilized modems with speeds measured in bits per second.

Internally, CPUs and software applications need to be able to pass information around.  Within a single application (or process), this is often done with memory pointers or some equivalent.  However, between processes or separate applications there needs to be some communications protocol.

When systems or applications used one protocol for internal communications and then translated data to IP for external communications, many felt this translation process was too slow and consumed too many resources.  Another customer frustration arose from the initial practice of vendors shipping their product with only its native internal protocols and requiring customers to obtain an “IP Stack” from a 3rd party.  In the early days of Windows 3.x and even the initial version of Windows 95, it was common for the installed operating system to only contain a couple of Microsoft LAN protocols.  IPX/SPX (Novell), SDLC/HDLC (IBM), AppleTalk, and TCP/IP all required installation of 3rd party software which Microsoft provided little or no support for.

In the mid to late 90’s there were many products available which provided multi-protocol translation services to both desktop operating systems and servers.  It was common to find “Multi-Protocol Router” products, usually software gateways, available for establishing (and controlling) communications between an organization’s WinTel, Apple, Mainframe, and other environments.  These multi-protocol router applications could also serve as gateways and stateful application firewalls between internal environments and/or external EDI networks or the Internet.  Many similar products were available for the desktop: print gateways, internet proxies, access to EDI networks, remote dial / desktop control, and other services.

Amazon, Netscape, and Yahoo all came on the scene in 1994.  A lot of early investments were being made and many technical, economic, and social changes were coming together to increase demand for Internet technologies, products, and services.  And that demand was growing in both consumer and corporate markets.

So… it’s the mid to late 90’s.  All indicators are starting to scream that this Internet thing is going to be big.  A lot of good multi-protocol technology already existed for getting people and systems connected to the Internet.  But system performance and customer satisfaction were still poor.  Vendors were shipping multi-processing systems, some multi-processor systems, and multi-threaded applications, and customers were loading up more applications than anyone expected.  Web sites were emerging and growing faster than bandwidth and modem capabilities.  Vendors were scrambling to get in on the gold rush.  And customers often perceived the multi-protocol stacks as performance bottlenecks and/or a source of many system errors.

In reality the problems often had more to do with poor thread management, synchronous queuing, and applications which generated excessive chatter or errors without actually crashing themselves (a misbehaving background task can easily convince a non-technical user that his foreground application isn’t working correctly).

In reality, many of the technologies available in the late 90’s simply were not ready for mass market.  Many products were well suited to their intended task and performed quite well for organizations with appropriate support and reasonable expectations.  Unfortunately… reality, appropriateness, and reasonable expectations are seldom priorities when it comes to mass marketing to consumers.  Many technologies were sold to the consumer (and small business) markets before the products were sufficiently robust or stable.  Revenues flowed; advertising dollars and customer perceptions overwhelmed technological realities, and in fact perceptions often became the effective reality.

As a result of these and other factors, the industry entered phase two (of the great IP adoption) as vendors began a rush to “IP Enable” their operating systems, system libraries, and application frameworks.  Across the industry a lot of products were being redesigned and code was being refactored.  Engineering priorities often included improving (or implementing for the first time):

  • multitasking – i.e., running (or appearing to run) multiple applications at the same time
  • multithreading – splitting an application’s work across multiple threads of execution.  Typically some work is sent to a background thread while trying to ensure the user interface or other input queues are kept responsive to new requests (see the sketch after this list).
  • remote processing – enabling multiple applications to make service requests or share data.  RPC, CORBA, OLE, DDE, and Java RMI are a few examples of remote processing technologies.  Remote processing does not require, and is not restricted to, applications running on multiple physical servers in multiple locations.  It can, and most often does, occur between applications running on a single host computer within a single operating system instance.  A very common example of “remote processing” happening on a local computer would be using the MS Outlook application and selecting the “View Messages in Word” option.  The Outlook app invokes the Word app, sends it data, and sends it instructions on what to do.
  • asynchronous processing and communications – in an asynchronous process, components can work independently of each other.  For software applications, this usually involves some optimization of logic for queuing up multithreaded workloads and handling results.  For communications I/O (whether disk, memory, network, etc) this usually starts with increasing available channels so transmit and receive operations can occur simultaneously without collisions; next would be optimizing the distribution of activity across available channels.
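
To put the multithreading and asynchronous items in present-day terms, here’s a small Swift sketch using Grand Central Dispatch; the workload is just a stand-in, and the pattern is background work with a hand-off back to the main (UI) queue:

    import Dispatch

    // Heavy work runs on a background queue so the foreground/input queue
    // stays responsive; the result is handed back to the main queue when done.
    DispatchQueue.global(qos: .background).async {
        let result = (1...1_000_000).reduce(0, +)   // stand-in for real work
        DispatchQueue.main.async {
            print("Background work finished: \(result)")
        }
    }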

Advancements in hardware technologies provided software engineers many reasons to rewrite and update their products.  Marketing’s demands for “IP Enabled” fit in nicely with these other priorities.  The engineers were also enamored with Internet communications and liked the idea of supporting fewer communications protocols.

  • At the time I met a lot of software engineers who had little or no idea of the size and complexity of the “TCP/IP Suite” which already existed in the mid 90s.  Even fewer could foresee the explosion of “protocol enhancements” which would follow.  Of the software engineers who actually create protocol implementations, many happily left IPX, SNA, NetBios, AppleTalk, and others in the dustbin of history… but I doubt you’ll find many who’d say life has gotten simpler since then.
  • IANA maintains a port numbering scheme for TCP and UDP protocols (mostly those which have been recognized thru the IETF RFC process).  At this time there are about 1,200 TCP/UDP protocols identified by the IANA registry.  Even within an “IP only” environment, this number is just a subset of the protocols available in the seven layer OSI stack.

What started for many product engineers (software and hardware) as an effort to make products compatible with IP soon became a very public optimization contest for vendors and their marketing organizations.

The race resulted in making IP the default protocol for inter-process communications and even intra-process communications… the birth of the “IP Enabled Operating System Kernel”.

The IP Enabled Kernel actually has two key characteristics.  Some instances may only have one of these, but many now have both.

The first characteristic could be described as “optimization by inclusion”… or you might call it “kitchen sink compiling”.  Many of the networking functions which were previously performed by software modules external to the kernel were compiled into the kernel’s source code.  By doing this, the kernel and networking features share the same physical memory space.  When the network function lived in a separate process, the kernel would need to physically copy data out to a new memory location which the network function could access.  When the two are combined, they can pass pointers to physical memory.  The result is a dramatic speed increase and reduction in I/O.

Imagine the user wants to send a local file from disk to a network location, but is using a computer system where everything is strictly separated into different application processes.  The System Kernel is in process #0.  The user is currently running application process #1.  The user request causes a file manager to be invoked in process #2.  And a network stack needs to be invoked in process #3.

In a well designed / optimized system, the file could be read directly from disk to the buffers of the network interface.  Proper process boundaries and virtual memory address management provided by the Kernel would prevent the User App, File Manager App, and Network App from knowing anything about each other or the Kernel… and the user’s request would be performed with a minimum of system resources.

Unfortunately, most systems today still aren’t that well designed.  A more common result was for the I/O to occur multiple times as the data traversed the various processes.  Or, in even worse circumstances, the physical memory pointers were passed to all of the processes interested in this information and a bug in one would bring everything down in a crash.

The “optimization by inclusion” approach has resulted in many of these functions being compiled into the OS Kernel [or into DLLs which are loaded into the kernel as the system boots… with pretty much the same run time result].

The second characteristic of the IP Enabled Kernel could be described as “process optimization”.  This approach does several things:

  • organizes the application (process) logic as close to the OSI Layers as practical
  • arranges I/O and data chunks into sizes and patterns which are optimized for encapsulation within IP packets.  Network IP Interfaces have a setting called MTU (Maximum Transmission Unit).  If a Kernel process is handling some data which might eventually be sent to a network interface, passing that data around in chunks which fit perfectly into the MTU would be a potential optimization (a sizing sketch follows this list).
  • prefers and implements IP Protocols for inter- (and sometimes even intra-) process communications.  This is one of the uses for the Loopback Address of 127.0.0.1.
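
As a rough illustration of the MTU point, here’s a Swift sketch with assumed values: a typical 1500-byte Ethernet MTU and plain IPv4/TCP headers with no options:

    // Assumed sizes -- a typical Ethernet MTU with option-free IPv4/TCP headers.
    let mtu = 1500           // bytes available per frame payload
    let ipv4Header = 20      // bytes
    let tcpHeader = 20       // bytes
    let chunkSize = mtu - ipv4Header - tcpHeader
    print("Pass data around in \(chunkSize)-byte chunks to fill each packet")   // 1460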

Over time, IP capabilities were made native to more and more OS components, system libraries, and app frameworks.

Today we’ve reached a point where vendors are trying to IP Enable our kitchen appliances.

* Actually some of the vendors tried this in the 90s, but the Utilities ignored them and many of the rest of us laughed at them.  Today the vendors are trying it again and people are starting to buy into it.  In some cases utilities have deployed smart grid products which unintentionally introduced IP capabilities on what were thought to be private, non-IP networks (both wired and wireless).  NERC has begun intervening and requiring stricter technology standards and security procedures for the utility industry.

I believe we’ve already gone too far.

I’m not a security by obscurity fan who wants some mysterious black box kernel setting at the heart of my technology products.

Nor am I some sort of closet luddite who wants to shut down the internet.  I like shopping online, using electronic bill pay with automatic bookkeeping and no stamp licking, and digital media.

But I do think it’s time we seriously consider going back to core components which don’t have native Internet capability.  Technology has reached the point where the potential workload from using multi-protocol gateway applications no longer presents a performance problem.

Firewalls and anti-malware tools have become de facto system requirements for everything.  I.e., we’re already running the workload of attempting to monitor IP-to-IP communications.  If we stopped allowing every little app, gadget, widget, process, and thread access to every feature of the IP Stack known to man, we could actually reduce the Firewall/Anti-Malware workload on our systems and achieve a higher level of confidence that monitoring is effective.

Memory virtualization and address randomization have evolved to the point where I/O can be optimized while still preventing processes which share data from knowing about each other or interfering with each other.

There’s no reason for an application to have Internet communications capability without expressly asking permission to load and utilize an appropriate framework.  At application run time, the user / device owner should have the option of denying the application that capability when desired.

Security issues would improve with systems which:

  • Move network interfaces and protocols out of the OS Kernel.
    • Use a non-kernel process to access network interfaces.
    • Use non-IP protocols for inter- and intra- process communications.  When a process (even a kernel process) needs network services, require it to request permissions and translation services thru a non-root gateway.
  • The entire network communication stack should be moved to a “multi-protocol router and stateful application firewall service” running under a non root account.
    • One place to enable/disable communication services.
    • One place to monitor communications services.
    • But not an all-or-nothing architecture.  It should be easy to control which protocols are enabled or disabled.  Same with apps.  And same with inter-process/service communication.
  • These aren’t concepts which require a lot of “start-from-scratch” efforts to realize.  The application logic already exists.  We created the DEN and CIM specifications back in the late 90s specifically to provide an industry standard way of managing relationships between people, devices, applications, and services.  
  • In high security environments, this architecture is the required default.  It’s usually achieved thru a combination of OS Hardening and 3rd party security products.  The hardening process removes unnecessary packages from the system, restricts communications capabilities to specific services, and forces communications to pass thru the 3rd party security product for evaluation.
  • Bluetooth devices are for personal area networks.  They don’t need a publicly routable IP Enabled network stack.
    • Nor do my USB, Firewire, Audio and HDMI interfaces!
  • Start applications in a ‘least privilege’ mode and allow the user / device owner to approve activation of features.  If the app doesn’t work, or at least fail gracefully, in least privilege mode it shouldn’t pass QA.  [And the operating system shouldn’t let it run without a user override.]
  • The Apple iOS Privacy Settings panel demonstrates a good concept, that could be improved.  Important services, features, etc., which have privacy/security concerns should be isolated to specific Libraries and Frameworks.  Operating systems should provide users / device owners a mechanism to enable or disable entire frameworks as they choose.
    • Organizations with high security requirements have been playing whack-a-mole with Mobile Device Vendors over features like cameras, microphones, location tracking, and more.  While some organizations have had small successes getting policy management points built into Mobile Device Management (MDM) products and Mobile Operating Systems… consumers have been left with little to no idea what their devices are doing or are capable of doing.
    • New features should be linked to a framework and privacy control mechanism before the feature’s GA release.

These issues don’t apply just to Smartphones, laptops, and other typical IT products.  These issues are just as important for automobiles, appliances, electronic healthcare products, home automation products, industrial robots, the emerging market of home assistive / personal robotics products, and any other newfangled gadgets coming along with abilities to store, process, or communicate information.

Some time ago, a TED talk described a “moral operating system”.  The speaker was describing the need for a system of morality for people… but I tend to take things literally, and kept returning to the idea of, “how could we improve computer operating systems to facilitate these ideas?”

The obvious first step has been known for years.  Design systems so the default choice is typically the better choice.

Another requirement for this new operating system: it needs to begin with the principle that everything on the disk / storage media belongs to the user or device owner.  It’s my information and I have a right to see it when I want to look at it.  It’s my information and I have a right to monitor which applications or processes have been accessing or modifying it.  And I have a right to restrict which applications or processes can access my information on the disk or storage media.

I’m not daft.  I realize DRM isn’t going away anytime soon.  And I’m not here to argue over which DRM system, if any, is better than the other.  I believe an inherently secure, user-centric operating system can still accommodate a DRM’d service by:

  • giving me the choice to delegate  control of a storage location and control of an application sub process to the DRM service.
  • the delegated storage location could be an external media device I choose to dedicate to the service or, more likely, be an encrypted sparse disk image I choose to allow the service to create (at a file location of my choosing).
  • the delegated application “sub process” would likely be some sort of “certificate management” utility which kept the keys to the delegated storage location.
  • so long as I permit the “sub process” to run and don’t tamper with it, it would be able to verify its code signature and verify its certificates to provide sufficient assurance to the DRM’d content provider that I’m following the terms of our agreement.
  • The DRM service should have absolutely no reach or influence within my computer system beyond its application sandbox, its delegated sub process, and its delegated storage location.
  • If I wish to stop or delete the service, it should be as simple as exiting or deleting the application.  The only negative consequence should be losing the ability to read the contents of the encrypted delegated storage area.  Deleting that storage remains my decision, and so does the option of re-installing the App to restore access to the DRM’d media.

In addition to the Virtual Memory Addressing, Memory Address Randomization, and Memory Encryption architectures which have been implemented for computer RAM… I’d also like to see similar architectural changes for how applications are allowed to interact with the file system.

For example, some features might include:

  • Restrict sandboxed applications to a virtual file system using encryption and address randomization instead of allowing the application access to any part of the real file system.
  • Give the user controls to provide an application with access to “the file system framework” so it can interact with things outside its sandbox.  Include some granular choices such as file, directory, or “other app’s data”… with standard file permission options still available also.
  • Just as it may be reasonable to expect an application to ask permission to use a NetworkFramework to communicate outside of its sandbox, it should also be reasonable for an application to need permissions for a FileSystemFramework before interacting with data/media outside of its sandbox (see the sketch below).
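
Here’s a purely hypothetical Swift sketch of that idea.  “FileSystemFramework”, its requestAccess call, and the path used below are all invented for illustration; they do not correspond to any existing API:

    import Foundation

    // Hypothetical API -- nothing here exists today; it only sketches the idea of
    // an app needing explicit approval before touching data outside its sandbox.
    protocol FileSystemFramework {
        func requestAccess(to path: String, completion: @escaping (Bool) -> Void)
    }

    func export(report: Data, using fs: FileSystemFramework) {
        fs.requestAccess(to: "/Users/Shared/Reports") { granted in
            guard granted else { return }   // the user / device owner said no
            _ = try? report.write(to: URL(fileURLWithPath: "/Users/Shared/Reports/summary.pdf"))
        }
    }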

Again, to summarize in short form, these few key changes could improve the inherent security of many computer / electronic products:

  • Take the network interfaces out of the OS Kernel.
  • Take the network protocols out of the OS Kernel.
  • Direct all network communications thru a non-root multi-protocol router and stateful packet inspection service.
  • Wrap major product features in a system framework and give the user control over whether that framework is accessible on their device, and by which apps/services if they choose to enable.
  • Always start things in least privilege mode until the owner approves more access.
  • Always start from a place which acknowledges the user’s ownership of information and preserve the user’s ownership and rights.
    • Only the owner can choose to delegate control [the operating system and 3rd party applications cannot arbitrarily grant themselves control over the user’s information].
    • Only the owner can choose to provide access to information.
    • And, only the owner can choose to disclose information.

In real, day to day terms… these architectural changes would not require large shifts in the way most developers and engineers go about building their products.  Very few software engineers actually write protocol stacks, kernels, or system frameworks.  For everyone else writing software, the difference between including a framework in your application vs “getting it for free” from the operating system can be as simple as a checkbox or a “# Include”.

The biggest effort, and most important work, is for the kernel and framework developers to adopt architectures which default to inherently safe security configurations and give users control over whether frameworks/features are enabled.

The Linux and Unix communities already have secure OS implementations which achieve some of these goals.  Apple, Oracle, Redhat, and Novell all share some responsibilities for completing the architecture and making it standard in their products.  Microsoft probably has the most baggage to overcome.

Many others in the IT Industry also share responsibilities in making these sort of changes.  Nokia, Siemens, Samsung, Blackberry, Google/Motorola, HP, IBM, and Cisco all need to step up.

Some, such as Symantec, stand to lose some market share if OS Vendors finally step up and fulfill their responsibilities.

Intel, AMD, Motorola, Qualcomm, and TI all have a stake in this as well. Intel is easily in the leadership position right now, since their acquisition of McAfee was explained as being done for the express purpose of introducing more security capabilities directly into the CPU and reducing the need for complex 3rd party products to be loaded by the customer after the system purchase.

Listing the CPU manufacturers brings me to my final point for the security architecture recommendations.  To some extent, this one is mostly on the CPU makers, but coordinating with the OS makers will help.

Enough with the kitchen sink “system on chip” approach.  Yeah, it’s a great idea.  But overdoing it is like combining a Super-Walmart, a Cabelas, a college dorm complex, a hospital, and a super-max prison all into a Seven Eleven.  Who decided the most trusted processes and the least trusted processes should run on the same chip?  The $99 smartphone of today provides as much or more processing capability as $200,000 systems available at the time many of these architectural decisions were made.  Inertia has kept us on course.  It’s time to reconsider old design decisions.

After decades of watching more capabilities be combined into a single chip, I’m no longer convinced it’s the great idea it started out to be.  Keep making things more energy efficient, smaller, lighter, and faster.  But consider backing off the physical co-mingling of chip capabilities.  Consider fencing some of these untrusted communication services off to a component chip and working with the Secure OS makers to build good gateway processes and frameworks for controlling the flow of data.

As for multi-core CPUs… in consumer devices, I still haven’t seen many examples of workloads (applications) which can properly utilize four or more cores… and I’ve seen even fewer examples of consumer workloads which actually need to do so for more than a fraction of a second.  Very few consumers run video rendering processes, and even fewer run multiple virtual machines in a continuous build development process.

On the other hand, I believe a currently underdeveloped chip feature which could provide immediate benefits to consumer and business markets combines secure I/O and secure storage.  In our laptops, smartphones, and other similar devices, our media libraries (video, audio, photos) have grown large, but much of our critical personal information fits within just a few megabytes, up to a couple GB for those who have been paperless longer.  CPU and OS makers should look at implementing physical and logical pathways dedicated to providing the user with a secure data vault.  It could utilize any number of different implementation strategies: some Flash on the motherboard, a CPU pathway available to a specific USB/MicroSD slot or to an optional region/address within an SSD, or something we haven’t even thought of yet.  Whatever it ends up being, it just needs to provide the user with a means of vaulting relatively small amounts of critical data away from their primary storage disk, in such a way that an app granted generic/general file system access would still be physically and logically isolated from the vault.  The best explanation of my interest here may be “Keychain on steroids”… put the data at a location physically separate from the regular storage disk, use a different file system, a different encryption protocol and key, and implement a coordinated CPU and OS architecture which requires all access to the vault to be shunted thru specific/dedicated frameworks and gateway services (i.e., not directly accessible to regular OS and App processes).
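
The closest present-day analogue is probably the Keychain itself.  A minimal Swift sketch of tucking a small secret into it rather than the regular file system (the service and account names are made up):

    import Foundation
    import Security

    // Store a small secret in the Keychain instead of an ordinary file.
    // The service/account names are placeholders.
    let secret = "my small pile of critical data".data(using: .utf8)!
    var query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "example.data.vault",
        kSecAttrAccount as String: "demo-user"
    ]
    SecItemDelete(query as CFDictionary)        // clear any previous copy
    query[kSecValueData as String] = secret
    let status = SecItemAdd(query as CFDictionary, nil)
    print(status == errSecSuccess ? "stored in keychain" : "keychain error \(status)")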

Personally, I have three applications which I use in a “data vault” fashion and which would benefit from this architecture.  I only run them occasionally, when I need them.  They already have extra security controls invoked when launching the apps.  Placing the app data into a secure vault protected by physical and logical separations, including a security framework and control gateway, would both improve and simplify the scenario I currently use.  There is a market for this kind of functionality.  If there wasn’t, products like 1Password and RSA SecurID would not exist.

In product areas outside traditional tech markets, vendors are already running into challenges to the System-On-a-Chip (SOC) trend.  NERC recently began requiring smart-grid device makers producing residential smart meters to separate functionality into at least three physically separate portions within the device.  One trusted portion of the device is required to undergo extensive certification testing (every version of hardware and every version of software, even minor updates).  This portion would be allowed to communicate with utility grid control systems.  A second, less trusted, and optional, portion could be implemented for local maintenance access (local/downstream only, no grid/upstream access).  A third, and mostly untrusted, portion is provided for consumer facing services, which carry the high risk issues of being available to the consumer’s home network and also getting frequent software updates as consumer features are continuously developed.  Overuse of SOC architectures compromised the entire smart grid.  This mandatory chip/feature segregation is critical to the utility industry and provides benefits none of the other proposed architectures can match.

Automotive, aviation, and many other industry segments have similar requirements for architectural physical separation of features.  We need to recognize the value of dis-integration in consumer products as well.

It’s almost ironic that much of the IT industry has been loudly espousing the benefits of loose coupling and dis-integration for a decade or more.  Yet most of the industry overlooked the increasingly tight coupling between Operating Systems and Network Stacks.

Adopting a new operating system always involves a learning curve.  But I look forward to learning a new modern and inherently secure OS that doesn’t have built in IP support.

And if anyone expects to sell me a self driving vehicle or a personal robot (next year or 20 years from now), consider this early notice of my #1 priority.  An inherently secure design.

In fact, for cars and robots… the world would probably be better off if the communications capabilities were removed to physically separate chips and I/O pathways between the CPU and CommChips controlled by physical switches or keys.  Turning off the switch or removing a key should permit the device to otherwise operate normally… just prevent it from getting new instructions from the neighbor kid while we’re sleeping or gone fishing.

Installer quit unexpectedly… root cause missing JavaLaunching.framework

Error logs indicate the system needs Java for the package installer to work properly.  As verified and documented below, it does not; it only needs a “framework folder” containing some reference information and will work just fine without a JRE, JVM, JDK, or any other part of Java.

This applies to (i.e., verified on 2013-03-31):  OS X Mountain Lion 10.8.3
Given the age/history of the Installer utility, this issue probably applies to many other versions of OS X.

Attempting to install anything from a *.pkg results in an “Installer quit unexpectedly.” error message.

Below is a snippet taken from multiple error logs resulting from trying to install different packages (two 3rd party apps, two Apple developer downloads, one OS X update, and one Oracle Java update).  All six pkg installations failed with error logs containing these same lines.

The problem… a few weeks ago I scrubbed the system of all traces of Java that I could find.  In light of all the recent Java exploit news, I wanted to test how complicated it would be to completely purge Java from the system; and I wanted to see if doing so would bring any unexpected consequences.  Well, removing all traces is complicated (and difficult to verify with certainty).  Although everything on the system continued to work fine at first, there are unexpected dependencies… such as Apple referencing a Java Framework within their package installer.

Given the amount of “stuff” Apple OS X inherited from Sun Solaris over the years, this shouldn’t come as too big a surprise.  Sun used to regularly hard code Java Library dependencies into products which didn’t actually need them.  While the text of the error messages is different (but not by much), this is the exact behavior encountered installing certain Sun applications from the command line on headless servers… i.e., the server had absolutely no reason to load a graphical environment, but the software installer was hard coded to look for the Java GUI packages.  For environments where security policies forbid unnecessary packages, we’d isolate the server, load the extra packages, complete the installation, and then remove the unnecessary packages… after the initial install/config was worked out, the organization would use DR processes to build new instances (eliminating the need to keep fiddling with those GUI packages).

By the way, the command line “Installer -vers” outputs “… v. 1.5.0 Copyright (c) 1999-2006 …”  That does match up with the time frame when code was being merged into OS X 10.4 from Solaris 10.

The fix… well, four of the above package installation attempts were attempts at fixing.  This is why hard coding dependencies is a bad idea; attempting to install a package that would solve the problem requires the referenced package, and fails.  After some research (and documentation of activity thus far), I tried another command line:

         sudo installer -pkg /Volumes/OS\ X\ 10.8.3\ Update\ Combo/OSXUpdCombo10.8.3.pkg -target /

oops… got ahead of myself there.  That one may have been like swatting flies with a cannon, and I’d intended to try various command line options on the Java packages first.  It’s done and requesting a system restart now.  Didn’t matter, it didn’t solve the problem.  The framework was not restored and normal package installations are still failing.

Next I tried some other command line combinations and also tried extracting the packages to see if the frameworks could be manually located.  No joy with either approach.

Running out of options, but before trying any of the operating system recovery / re-installation choices, I’ll try restoring the framework from Time Machine.

Looking thru my backups, I found /System/Library/PrivateFrameworks/JavaLaunching.framework (and /JavaApplicationLauncher.framework) from a few weeks ago and restored the folder to its original location.

Result… restoring just 375KB of framework folders fixed the problem and the OS X package installer is working again.  It was not necessary to restore/install a JRE or JVM or any other part of Java.  It just needed the required folder containing references to a Java framework.  Installer doesn’t need Java and it doesn’t use Java, but it was hard coded to require the presence of a symbolic reference.

 

Moral of the story…

  • Java – annoying.
  • hard coding artificial dependencies – shouldn’t be allowed to escape unit testing let alone make it into GA production release software.
  • backups which work and provide options for partial restores – everyone should have them.
  • Even simple steps for hardening a consumer operating system quickly become complicated, but can usually be resolved without too much fuss.
  • Code templates with a lot of “# Includes” may be convenient for developers but often
    1. present a headache for users – by creating required dependencies which should have been optional.
    2. introduce vulnerabilities – by requiring components which aren’t necessary in the target deployment environment.
    3. present long term maintenance problems – by making an entire application dependent on something which should have been an optional feature.
The problem report includes these lines:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Application Specific Information:
dyld: launch, loading dependent libraries
Dyld Error Message:
  Library not loaded: /System/Library/PrivateFrameworks/JavaLaunching.framework/Versions/A/JavaLaunching
  Referenced from: /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer
  Reason: image not found
Binary Images:
       #x######### – #x######### com.apple.installer (6.0 – 614) <3E180768-4C29-3B0D-A47D-F4A23760F824> /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer
       #x######### – #x######### com.apple.GraphKit (1.0.5 – 30) <5ECA4744-FFA8-3CF0-BC20-3B2AD16AD93C> /System/Library/PrivateFrameworks/GraphKit.framework/Versions/A/GraphKit
       #x######### – #x######### com.apple.securityinterface (6.0 – 55024.4) <FCF87CA0-CDC1-3F7C-AADA-2AC3FE4E97BD> /System/Library/Frameworks/SecurityInterface.framework/Versions/A/SecurityInterface
    #x######### – #x######### dyld (210.2.3) <A40597AA-5529-3337-8C09-D8A014EB1578> /usr/lib/dyld
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Midwest gets surprise snow forecast in February

In today’s news… Midwest gets surprise snow forecast in February.

Does anyone in the media realize how they sound when acting surprised by snow in the winter?  This does occur about the same time every year.

Will Cloud Computing be the Sub-Prime of IT?

Sub-prime lending enabled a great quantity of borrowers, lenders, and investors to participate in markets and transactions which they most often did not fully understand. In many (perhaps most) cases they did not understand the risk involved, did not understand the contracts they entered into, and were completely unprepared when risks became realities.

Similarly, public cloud computing services are enabling great quantities of new customers, service providers, and investors to make low cost entries into complex transactions with many poorly understood or entirely unknown risks.

The often low upfront costs combined with rapid activation processes for public cloud services are enticing to many cost conscious organizations.  However, many of these services have complex pay as you go usage rates which can result in surprisingly high fees as the services are rolled out to more users and those services become a key component of the users’ regular workflows.

Many public cloud services start out with low introductory rates which go up over time.  The pricing plans rely on the same psychology as introductory cable subscriptions and adjustable rate mortgages.

Additionally, there is often an inexpensive package rate which provides modest service usage allowances. Like many current cell phone data plans, once those usage limits are reached, additional fees automatically accumulate for:

  • CPU usage – sometimes measured in seconds or microseconds per processor core, but often priced as any portion of an hour.
  • Memory usage – measured by the amount of RAM allocated to a server or application instance.
  • Storage usage – usually measured by GB of disk space utilized, but sometimes still priced by the MB.  Sometimes charged even if only allocated but not utilized.
  • Data transfer – often measured in GB inbound and/or outbound from the service.  Many providers may charge data transfer fees for moving data between server (or service) instances within the same account.
  • IO – this is often nebulous and difficult to estimate in advance.  In the simplest definition, IO stands for Input and Output.  Many technology professionals get mired in long debates about how to measure or forecast IOs and what sort of computer activities should be considered.  It’s a term that is often applied to accessing a disk to load information into memory, or to write information from memory to disk.  If a service plan includes charges for IOs, it’s important the customer understand what they could be charged for.  A misbehaving application, operating system, or hardware component can cause significant amounts of IO activity very quickly.  (A rough cost sketch follows this list.)
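
To make those line items concrete, here’s a back-of-the-envelope Swift sketch.  Every rate and quantity below is a made-up assumption; substitute your provider’s actual price sheet:

    import Foundation

    // Hypothetical usage and rates -- purely illustrative, not any provider's real pricing.
    let cpuHours  = 3.0 * 24 * 30        // three small instances, always on
    let storageGB = 500.0                // allocated disk
    let egressGB  = 200.0                // outbound data transfer
    let estimate = cpuHours  * 0.10 +    // $ per CPU-hour
                   storageGB * 0.12 +    // $ per GB-month stored
                   egressGB  * 0.09      // $ per GB transferred out
    print(String(format: "Estimated monthly bill: $%.2f", estimate))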

User accounts, concurrent user sessions, static IP addresses, data backups, geographic distribution or redundancy, encryption certificates and services, service monitoring, and even usage reporting are some examples of “add-ons” which providers will upsell for additional fees.

It is also common for public cloud service providers to tout a list of high profile clients.  It would be a mistake to believe the provider offers the same level of service, support, and security to all of their customers.  Amazon, Google, and Microsoft offer their largest customers dedicated facilities with dedicated staff who follow the customer’s approved operational and security procedures.  Most customers do not have that kind of purchasing power.  Although the service provider’s marketing may tout these sorts of high profile clients, those customers may well be paying for a Private Cloud.

Private Cloud solutions are typically the current marketing terminology for situations where a customer organization outsources hardware, software, and operations to a third party and contracts the solution as an “Operational Expense” rather than making any upfront “Capital Expenditures” for procurement of assets.

* Op-Ex vs Cap-Ex is often utilized as an accounting gimmick to help a company present favorable financial statements to Wall Street.  There are many ways an organization can abuse this and I’ve seen some doozies. 

Two key attractions for service providers considering a public cloud offering are the Monthly Recurring Charge (MRC) and auto renewing contracts.  The longer a subscriber stays with the service, the more profitable they become for the provider. Service providers can forecast lower future costs due to several factors:

  • Technology products (particularly hard drives, CPUs, memory, and networking gear) continue to get cheaper and faster.
  • The service provider may be able to use open source software for many of the infrastructure services which an Enterprise Organization might have purchased from IBM, Microsoft, or Oracle.  The customer organization could achieve these same savings internally, but is often uncomfortable and unfamiliar with the technologies and unwilling to invest in the workforce development needed to make the transition.
  • The service provider may also be able to utilize volume discounts to procure software licenses at a lower cost than their individual customers could.  For small customer organizations this often holds true.  For larger enterprise organizations this is usually a false selling point, as the enterprise should have an internal purchasing department to monitor vendor pricing and negotiate as needed.  Unfortunately many large organizations can be something of a dysfunctional family and there may not be a good relationship between IT, customer business units, and corporate procurement.  Some executives will see outsourcing opportunities as the “easy way out” vs solving internal organizational issues.
  • Off-shore labor pools are continuing to grow both in size and in capability.  Additionally, the current economic circumstances have been holding down first world labor rates.
  • Service Providers can and do resell and outsource with other Service Providers.  In the mobile phone industry there are numerous Mobile Virtual Network Operators (MVNOs) who contract for bulk rate services from traditional carriers and then market those services for resale under their own branding and pricing plans.  Many cloud service providers have adopted similar business models.

All of these cost factors contribute to the service provider’s ability to develop a compelling business case to its investors.

The subprime market imploded with disastrous consequences when several market conditions changed. New construction saturated many markets and slowed or reversed price trends. Many customers found they couldn’t afford the products and left the market (often thru foreclosures which furthered the oversupply). Many other customers recognized the price increases built into their contracts (variable rate mortgages) and returned to more traditional products (by refinancing to conventional loans). And many sub-prime lenders were found to have engaged in questionable business practices (occasionally fraudulent, often just plain stupid) which eventually forced them out of the business while leaving their customers and investors to clean up the mess.

Like the housing market, public cloud computing is on course to create an oversupply. Many of these cloud providers are signing customers up for contracts and pricing models which will be invalidated in a short time (as processing, storage, and bandwidth continue to get faster and cheaper). And few, if any, of these providers understand the risk environment within which they operate.

Public cloud computing is sure to have a long future for “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.

For personal and organizational computing of “inherently private” data, the long term value is questionable, and should be questioned.

Current public cloud services offer many customers a cost advantage for CPU processing.  They also offer some customers a price advantage for data storage, but few organizations have needs for so called “big data”.  The primary advantage of public cloud services to many organizations is distributed access to shared storage via cheap bandwidth.

Competing on price is always a race to the bottom.  And that is a race very few ever truly win.

Public cloud service providers face significant business risks from price competition and oversupply.  We saw what happened to the IT industry in the early 2000‘s and these were two key factors.

Another factor is declining customer demand.  The capabilities of mobile computing and the capabilities of low cost on-site systems continue to grow rapidly.  In today’s pricing, it may be cheaper to host an application in the cloud than to provide enough bandwidth at the corporate office(s) for mobile workers.  That is changing rapidly.

A T1 (1.5Mbps) connection used to cost a business several thousand dollars per month.  Now most can get 15Mbps to 100Mbps for $79 per month.  As last mile fiber connectivity continues to be deployed, we’ll see many business locations have access to 1Gbps connections for less than $100 per month.

All of those factors are trumped by one monster of a business risk facing public cloud service providers and customers today: how should they manage the security of inherently private data?

Many organizations have little to no idea of how to approach data classification, risk assessment, and risk mitigation.  Even the largest organizations of the critical infrastructure industries are struggling with the pace of change, so it’s no surprise that everyone else is behind on this topic.  Additionally, the legal and regulatory systems around the world are still learning how to respond to these topics.

Outsourcing the processing, storage, and/or protection of inherently private data does not relieve an organization from its responsibilities to customers, auditors, regulators, investors, or other parties who may have a valid interest.

Standards, regulations, and customer expectations are evolving.  What seems reasonable and prudent to an operations manager in a mid-sized organization might appear negligent to an auditor, regulator, or jury.  What seems ok and safe today could have disastrous consequences down the road.

Unless your organization is well versed in data classification and protection, and has the ability to verify a service provider’s operational practices, I strongly recommend approaching Public Cloud services with extreme caution.

If your organization is not inherently part of the public web services “eco-system”, it would be prudent to restrict your interactions with Public Cloud computing to “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.  At least until the world understands it a bit better.

The costs of processing and storing private data will continue to get cheaper.  If you’re not able to handle your private data needs in house, there are still plenty of colocation and hosting services to consider.  But before you start outsourcing, do some thoughtful housekeeping.  Really, if your organization has private data which does not provide enough value to justify in house processing, storage, and protection… please ask yourselves why you even have this data in the first place.

Industry Architecture – it’s a tad more complicated than Enterprise Architecture

While trying to “summarize” some of my previous work for the Service Provider industry, I realized it appears no one has coined the phrase “Industry Architecture”.

So… I’m staking my claim to the phrase now.

Think I’ll update the title on my business cards.

Here’s the mission of an Industry Architect:

Develop a new technology which gets more useful as more organizations and people use it.  To reach its potential, it will need so many users that it will require multiple vendors, manufacturers, developers, and large customers (such as Tier 1 Service Providers) to adopt it.  In fact it will need multiple standards bodies and regulatory agencies to get on board.

As a result of user adoption, old companies may well cease to exist.  New companies will emerge.  If successful, it could even change behaviors on a global scale.

That’s Industry Architecture.

test post re yahoo social

Just testing a WordPress-to-Yahoo connection feature.  Wondering what shows up if I enable WordPress “publicize” to post to a Yahoo profile… and wondering where it shows up at Yahoo’s websites.

update 1: guess you have to have an active Yahoo profile first… enabled that, and trying the post again.

update 2: after digging around in Yahoo for a while and trying to figure out where these updates would be visible… I finally found this blurb of information from Yahoo:

Where can I see updates I'm sharing?
You can see your shared updates on Yahoo! Messenger for Windows.

Well, I’m sure that might be useful for someone…

Other digging around Yahoo’s site regarding social features and social APIs turned up notices that they’ll be shutting down several of their services on April 16, 2013.

Fix repeating iTunes prompt to accept incoming network connections.

Do you want the application “iTunes.app” to accept incoming network connections?

If you get this prompt every time you start iTunes, it’s probably an issue with the application contents vs the application’s code signature.  Users seem to be encountering this sort of problem more frequently since Apple’s introduction of additional code signing, sandboxing, and GateKeeper functions in Mountain Lion.

Normally, this terminal command:

$ codesign -vvv /Applications/iTunes.app

should result in this:

/Applications/iTunes.app: valid on disk
/Applications/iTunes.app: satisfies its Designated Requirement

If not, then the application package contents are probably mucked up.  I recently encountered this situation: the command’s output listed a lot of extra files in the package, probably leftovers from an update.

Most of the recommended fixes involve deleting iTunes and reinstalling from a fresh download.  However, Mountain Lion won’t let you delete iTunes; it says iTunes “can’t be modified or deleted because it’s required by Mac OS X.”

Some folk have had success by simply running the installer anyway.  In my case, though, the extra files weren’t removed.  Instead, I found I could right-click the /Applications/iTunes.app package and choose “Show Package Contents”.  Once inside the package, I deleted the entire “Contents” directory, then installed iTunes using a fresh download from Apple’s website.  The codesign command then reported the correct results and the Firewall prompts stopped repeating.
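
For reference, here is a rough Terminal sketch of the same repair.  It is not the exact procedure I used (I deleted the directory through Finder), and it assumes you have already downloaded a fresh iTunes installer from Apple before removing anything:

# Check the current signature state; a healthy install reports "valid on disk".
$ codesign -vvv /Applications/iTunes.app

# Remove the damaged package contents (Finder refuses, but sudo in Terminal should work on Mountain Lion).
$ sudo rm -rf /Applications/iTunes.app/Contents

# Reinstall iTunes from the fresh download, then verify the signature again.
$ codesign -vvv /Applications/iTunes.app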

FYI: this occurred after a clean re-install of OS X 10.8.2 Mountain Lion on a 2012 MacBook Pro.  I did the re-install to clean up problems from a number of app testing sessions and restore a clean configuration.  After running the available software updates from the App Store, the iTunes / Firewall prompts began appearing each time I started iTunes.  Most likely the App Store update for iTunes didn’t clean up old files correctly.

COTS and corporate consumerism.

It is truly amazing how many companies go on consumeristic shopping sprees, buying so-called “COTS packages” in hopes of instant gratification.

The marketing wordsmiths of the software industry have achieved great results in convincing folk that the definition of COTS is something like:

Short for commercial off-the-shelf, an adjective that describes software or hardware products that are ready-made and available for sale to the general public. For example, Microsoft Office is a COTS product that is a packaged software solution for businesses. COTS products are designed to be implemented easily into existing systems without the need for customization.

Sounds great, doesn’t it?  Here is a portion of an alternate definition which rarely makes it into the marketing brochures:

“typically requires configuration that is tailored for specific uses”

That snippet is from US Federal Acquisition Regulations for “Commercial off-the-shelf” purchases.

In other words, most of the COTS packages should at least come with a “some assembly required” label on the box.  Granted, most vendors do disclose the product will need some configuration.  But most gloss over the level of effort involved, or sell it as another feature.  And most organizations seem to assign procurement decisions to those least able to accurately estimate implementation requirements.

The most offensive of these scenarios involves developer tools and prepackaged application components for software development shops. SDKs and APIs are not even close to being a true COTS product, but numerous vendors will sell them to unsuspecting customers as “ready to use” applications.

If the organization has a team of competent software developers… then really, what is the point of purchasing a “COTS” package which requires more customization (through custom software development) than just developing the features internally?

Some vendors have sold the idea that they provide additional benefits you wouldn’t get from developing internally, such as:

  • packaged documentation comes with the software.
  • vendor gets feedback from many customers and makes it better for everyone.
  • vendor specializes in supporting the product.

Those are all suspect.

  • If the product requires customization, will the vendor provide custom documentation?  If not, their pre-packaged documentation will likely be useless.  The only authoritative source of documentation for source code… is the source code.  Good coding standards, including commenting and version control statements, will provide far more value than a collection of PDFs from VendorX.
    • Can the vendor provide an IDE plug-in which integrates Class, Method, API, and Interface documentation with the rest of your language environment?
    • Can the vendor be expected to keep these up to date for the development tools your team uses?
  • Increasingly, vendors are no longer the best or primary source of product information.  User communities evolve independently of specific vendors; many begin around the overall service or concept involved, and develop subgroups for specific vendor products.  As a result, it is easier than ever to compare and contrast information for competing products at a site which shares common interests and contexts.
  • Vendor support comes in many flavors, and not all of it is equally useful (or affordable) to all customers.
    • If the customer configuration is complex, continuity of support personnel is important.  Dedicated support from a large software vendor can run $1 Million per year per named individual providing support.  Otherwise your support calls go into the general queue with no continuity.
    • Large (publicly traded) software vendors operate on a financial basis which makes it difficult for them to run large scale professional services businesses.  Most every company that tries to combine product with large scale (i.e. thousands of staff consultants) professional services eventually implodes due to the cultural and financial conflicts between the two lines of business.

Failed software implementations can drive a company into the ground.  Complex COTS packages which only serve as a component to be “integrated” into customer systems through custom programming can often be a major contributing factor to project/program failures.  The larger the failure, the less likely the organization can retain sufficient stakeholder trust to try again.

Organizations with existing capabilities for large scale internal software development should reconsider the mantra of “All COTS, all the time, everywhere.”

US corporate financial practices haven’t just indoctrinated the citizenry into consumerism.  They’ve equally indoctrinated organizations of all kinds.  Before you make that next COTS purchase order, pause, and give a moment’s consideration to “producerism”.  The long-term benefits could be startling.

By the way, this phenomenon isn’t limited to software components.  I’ve seen organizations procure “appliances” at six-figure costs because they perceived them to provide an application service which would save $1 or $2 Million in software development costs downstream.  Unfortunately, they eventually learned it would require an additional $2 to $5 Million of software development to modify their application portfolio to work with these appliances.  After spending (wasting) 18 months and over $1 Million, they found a solution they could implement internally at very little cost (simply replacing an old, deprecated programming language API with a newer one).