Delete individual items from OS X 10.11 El Capitan “Messages 9.0” app.

To delete individual items from the OS X 10.11 El Capitan “Messages 9.0” app:   *This applies to deleting specific messages within a conversation thread.  If the same Messages/iMessage account is active on multiple MacBooks, this will likely only delete the items on the specific MacBook where you do this (i.e., it doesn’t auto-delete the items from all connected devices).

  1. press and hold the Command key
  2. click each of the individual messages you want to delete from the conversation.
  3. press the Delete key.

It’s easy once you know the option is there, but it’s not obvious.


To delete individual items from a Messages conversation in iOS 9

  1. press and hold a finger on one of the items to be deleted.
  2. look for the “Copy | More…” popup to appear.
  3. tap “More…”; the selected message will get a checkmark and an option to delete.
    • if there are multiple items in the conversation, each will now have a circle which can be checked (tapped) to select for deletion.
  4. tap the circles to select any additional messages to be deleted.
  5. tap “Delete All” at the top left of the screen.

*This doesn’t delete message items from other devices.  Nor does it remove/retract sent items from recipients.


Was “IP enabling” the OS Kernel, System Libraries, and Application Frameworks a mistake?

For the impatient reader… I’ll cut to the point.  Yes, it was a mistake.  We are well beyond the point where a seemingly good idea has been taken to excess and evolved into a bad implementation.

How did I come to this conclusion?

I spend a lot of time evaluating applications (mobile, server, network elements, desktop, mainframe, and “other”).  I frequently encounter problems where a component was included in such a way that the software simply won’t work without a framework or library that should have been considered optional.

Frequently this occurs during a vulnerability assessment where an application is installed on a server which is denied GUI capabilities [a lot of developers hard code a “# Include” for GUI libraries even when providing command line capabilities, even for software targeting Unix servers].

I’ve also been encountering a lot of mobile apps which only provide single-user, offline features and have no use for or need of network communications capabilities.  Unfortunately I have to do a lot of extra work assessing these applications because OS Kernels, System Libraries, and Application Frameworks have all been “IP Enabled” by their vendors.

Consider this Apple iOS situation… If an application does not include the CoreLocation Framework, I can reasonably assume it’s not likely to use my GPS or location information and I can spend less time looking at those issues.  However, even a Notepad or Solitaire App has unfettered usage of NSUrl.   NSUrl is a primary mechanism for reading and writing files, both local and remote.  So I spend a lot of time looking for remote communications activity in Apps which shouldn’t even include the capability.

Some may believe the CFNetwork framework is required for network communications in an iOS App.  It’s not.  CFNetwork is only required when the developer wants to interact directly with the protocols at a lower level.  APIs like NSUrl are fully capable of interacting with a wide variety of media (file) types from most anywhere, and they are not limited to the HTTP protocol.
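As a rough illustration of the assessment problem, listing the frameworks a binary actually links against shows how little this tells you about network capability (a quick sketch; the app path here is hypothetical):

    # List the libraries/frameworks a Mach-O binary links against.
    # CoreLocation's absence is meaningful, but Foundation (home of NSUrl)
    # is linked by nearly everything, so network-capable file APIs come along for free.
    $ otool -L /Applications/SomeNotepad.app/Contents/MacOS/SomeNotepad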

As we slowly move towards IPv6, and situations where devices will have a multitude of IPv6 addresses, the ability to distinguish desired communications activity from undesirable will get even more complicated.

Apple’s iOS NSUrl API isn’t the only example of this; it’s just one which a lot of folks are likely to recognize today.  In reality, most modern operating systems are bursting at the seams with these sorts of IP Enabled “features”.

So how did we get to a point where the OS Kernel, System Libraries, and Application Frameworks are so “IP Enabled” that the same API is used whether reading a local text file or a remote file (of any kind whatsoever)?

Explaining this situation may need a little history review…

Once upon a time, computer systems (and networks) did not speak IP.  There were numerous other communications protocols and each had its strengths, weaknesses, and appropriate applications.

During the 90’s, in the early days of what most people now recognize as the Internet, vendors of operating systems, programming languages, and application development tools embarked on an industry wide effort to adopt IP as the primary communication protocol for their products.  At the time, this seemed like a great idea… nearly universal interoperability.

* Although it was an industry wide effort, it wasn’t particularly coordinated or thoughtful on an industry scale.  Some folks tried to provide some thoughtful leadership, but mostly it was a Cannonball Run of vendors scrambling for an anticipated gold rush [which led to the industry’s financial implosion in the early 2000s].

Looking back, the effort occurred as a two phase process.  During phase one (of the great IP adoption), the product’s core functions continued to use previously existing internal protocols and an “IP Stack” was added to the product.  From a customer perspective, this usually satisfied initial expectations and requirements.

However, vendors felt competitive pressure to optimize their products.  Remember, in the early to mid 90’s CPUs were measured in Megahertz, 1MB of RAM was a high end system, and storage media was often measured in Kilobytes up to a few MB.  Network communications often utilized modems with speeds measured in bits per second.

Internally, CPUs and software applications need to be able to pass information around.  Within a single application (or process), this is often done with memory pointers or some equivalent.  However, between processes or separate applications there needs to be some communications protocol.

When systems or applications used one protocol for internal communications and then translated data to IP for external communications, many felt this translation process was too slow and consumed too many resources.  Another customer frustration arose from the initial practice of vendors shipping their product with only its native internal protocols and requiring customers to obtain an “IP Stack” from a 3rd party.  In the early days of Windows 3.x and even the initial version of Windows 95, it was common for the installed operating system to only contain a couple Microsoft LAN protocols.  IPX/SPX (Novell), SDLC/HDLC (IBM), AppleTalk, and TCP/IP all required installation of 3rd party software which Microsoft provided little or no support for.

In the mid to late 90’s there were many products available which provided multi-protocol translation services to both desktop operating systems and servers.  It was common to find “Multi-Protocol Router” products, usually software gateways, available for establishing (and controlling) communications between an organization’s WinTel, Apple, Mainframe, and other environments.  These multi-protocol router applications could also serve as gateways and stateful application firewalls between internal environments and/or external EDI networks or the Internet.  Many similar products were available to the desktop for print gateways, internet proxies, access to EDI networks, remote dial / desktop control, and other services.

Amazon, Netscape, and Yahoo all came on the scene in 1994.  A lot of early investments were being made and many technical, economic, and social changes were coming together to increase demand for Internet technologies, products, and services.  And that demand was growing in both consumer and corporate markets.

So… it’s the mid to late 90’s.  All indicators are starting to scream that this Internet thing is going to be big.  A lot of good multi-protocol technology already existed for getting people and systems connected to the Internet.  But system performance and customer satisfaction were still poor.  Vendors were shipping multi-processing systems, some multi-processor systems, and multi-threaded applications, and customers were loading up more applications than anyone expected.  Web sites were emerging and growing faster than bandwidth and modem capabilities.  Vendors were scrambling to get in on the gold rush.  And customers often perceived the multi-protocol stacks as performance bottlenecks and/or a source of many system errors.

In reality the problems often had more to do with poor thread management, synchronous queuing, and applications which generated excessive chatter or errors without actually crashing themselves (a misbehaving background task can easily convince a non-technical user that his foreground application isn’t working correctly).

In reality, many of the technologies available in the late 90’s simply were not ready for mass market.  Many products were well suited to their intended task and performed quite well for organizations with appropriate support and reasonable expectations.  Unfortunately… reality, appropriateness, and reasonable expectations are seldom priorities when it comes to mass marketing to consumers.  Many technologies were sold to the consumer (and small business) markets before the products were sufficiently robust or stable.  Revenues flowed, advertising dollars and customer perceptions overwhelmed technological realities, and in fact perceptions often became the effective reality.

As a result of these and other factors, the industry entered phase two (of the great IP adoption) as vendors began a rush to “IP Enable” their operating systems, system libraries, and application frameworks.  Across the industry a lot of products were being redesigned and code was being refactored.  Engineering priorities often included improving (or implementing for the first time):

  • multitasking – ie., running (or appearing to run) multiple applications at the same time
  • multithreading – splitting an application into multiple threads of execution.  Typically some work is sent to a background thread while trying to ensure the user interface or other input queues are kept responsive to new requests.
  • remote processing – enabling multiple applications to make service requests or share data.  RPC, CORBA, OLE, DDE, and Java RMI are a few examples of remote processing technologies.  Remote processing does not require, and is not restricted to, applications running on multiple physical servers in multiple locations.  It can, and most often does, occur between applications running on a single host computer within a single operating system instance.  A very common example of “remote processing” happening on a local computer would be using the MS Outlook application and selecting the “View Messages in Word” option.  The Outlook app invokes the Word app, sends it data, and sends it instructions on what to do.
  • asynchronous processing and communications – in an asynchronous process, components can work independently of each other.  For software applications, this usually involves some optimization of logic for queuing up multithreaded workloads and handling results.  For communications I/O (whether disk, memory, network, etc) this usually starts with increasing available channels so transmit and receive operations can occur simultaneously without collisions; next would be optimizing the distribution of activity across available channels.

Advancements in hardware technologies provided software engineers many reasons to rewrite and update their products.  Marketing’s demands for “IP Enabled” fit in nicely with these other priorities.  The engineers were also enamored with Internet communications and liked the idea of supporting fewer communications protocols.

  • At the time I met a lot of software engineers who had little or no idea of the size and complexity of the “TCP/IP Suite” which already existed in the mid 90s.  Even fewer could foresee the explosion of “protocol enhancements” which would follow.  Of the software engineers who actually create protocol implementations, many happily left IPX, SNA, NetBios, AppleTalk, and others in the dustbin of history… but I doubt you’ll find many who’d say life has gotten simpler since then.
  • IANA maintains a port numbering scheme for TCP and UDP protocols (mostly those which have been recognized thru the IETF RFC process).  At this time there are about 1,200 TCP/UDP protocols identified by the IANA registry.  Even within an “IP only” environment, this number is just a subset of the protocols available in the seven layer OSI stack (see the quick check after this list).
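For a rough sense of that scale on your own machine (a sketch; /etc/services ships with OS X, Linux, and most Unix systems, and the count varies by version):

    # Count the unique service names registered in the local services database.
    $ awk '$1 !~ /^#/ && NF {print $1}' /etc/services | sort -u | wc -l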

What started for many product engineers (software and hardware) as an effort to make products compatible with IP soon became a very public optimization contest for vendors and their marketing organizations.

The race resulted in making IP the default protocol for inter-process communications and even intra-process communications… the birth of the “IP Enabled Operating System Kernel”.

The IP Enabled Kernel actually has two key characteristics.  Some instances may only have one of these, but many now have both.

The first characteristic could be described as “optimization by inclusion”… or you might call it “kitchen sink compiling”.  Many of the networking functions which were previously performed by software modules external to the kernel were compiled into the kernel’s source code.  By doing this, the kernel and networking feature share the same physical memory space.  When the network function lived in a separate process, the kernel would need to physically copy data out to a new memory location which the network function could access.  When the two are combined, they can pass pointers to physical memory.  The result is a dramatic speed increase and reduction in I/O.

Imagine the user wants to send a local file from disk to a network location, but is using a computer system where everything is strictly separated into different application processes.  The System Kernel is in process #0.  The user is currently running application process #1.  The user request causes a file manager to be invoked in process #2.  And a network stack needs to be invoked in process #3.

In a well designed / optimized system, the file could be read directly from disk to the buffers of the network interface.  Proper process boundaries and virtual memory address management provided by the Kernel would prevent the User App, File Manager App, and Network App from knowing anything about each other or the Kernel… and the user’s request would be performed with a minimum of system resources.

Unfortunately, most systems today still aren’t that well designed.  A more common result was for the I/O to occur multiple times as the data traversed the various processes.  Or in even worse circumstances, the physical memory pointers were passed to all of the processes interested in this information, and a bug in one would bring everything down in a crash.

The “optimization by inclusion” approach has resulted in many of these functions being compiled into the OS Kernel [or into DLLs which are loaded into the kernel as the system boots… with pretty much the same run time result].

The second characteristic of the IP Enabled Kernel could be described as “process optimization”.  This approach does several things:

  • organizes the application (process) logic as close to the OSI Layers as practical
  • arranges I/O and data chunks into sizes and patterns which are optimized for encapsulation within IP packets.  Network IP Interfaces have a setting called MTU (Maximum Transmission Unit).  If a Kernel process is handling some data which might eventually be sent to a network interface, passing that data around in chunks which fit perfectly into the MTU would be a potential optimization.
  • prefers and implements IP Protocols for inter- (and sometimes even intra-) process communications.  This is one of the uses for the Loopback Address of 127.0.0.1 (see the quick checks after this list).
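Both of those last two points are easy to poke at from a terminal on OS X (a sketch; interface names like en0 vary by machine):

    # Show the MTU configured on a network interface.
    $ networksetup -getMTU en0

    # List processes holding TCP listeners on the loopback address
    # (i.e., local services using IP for inter-process communication).
    $ sudo lsof -nP -iTCP@127.0.0.1 -sTCP:LISTEN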

Over time, IP capabilities were made native to more and more OS components, system libraries, and app frameworks.

Today we’ve reached a point where vendors are trying to IP Enable our kitchen appliances.

* Actually some of the vendors tried this in the 90s, but the Utilities ignored them and many of the rest of us laughed at them.  Today the vendors are trying it again and people are starting to buy into it.  In some cases utilities have deployed smart grid products which unintentionally introduced IP capabilities on what were thought to be private, non-IP networks (both wired and wireless).  NERC has begun intervening and requiring stricter technology standards and security procedures for the utility industry.

I believe we’ve already gone too far.

I’m not a security by obscurity fan who wants some mysterious black box kernel setting at the heart of my technology products.

Nor am I some sort of closet luddite who wants to shut down the internet.  I like shopping online, using electronic bill pay with automatic bookkeeping and no stamp licking, and digital media.

But I do think it’s time we seriously consider going back to core components which don’t have native Internet capability.  Technology has reached the point where the potential workload from using multi-protocol gateway applications no longer presents a performance problem.

Firewalls and anti-malware tools have become de facto system requirements for everything.  I.e., we’re already running the workload attempting to monitor IP-to-IP communications.  If we stopped allowing every little app, gadget, widget, process, and thread access to every feature of the IP Stack known to man, we could actually reduce the Firewall/Anti-Malware workload on our systems and achieve a higher level of confidence in the monitoring being effective.
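OS X’s bundled application firewall hints at what per-app control looks like today, even though it only covers incoming connections (a sketch; the application firewall must be enabled for the list to be meaningful):

    # List the applications the OS X application firewall knows about
    # and whether each is allowed to accept incoming connections.
    $ sudo /usr/libexec/ApplicationFirewall/socketfilterfw --listapps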

Memory virtualization and address randomization have evolved to the point where I/O can be optimized while still preventing processes which share data from knowing about each other or interfering with each other.

There’s no reason for an application to have Internet communications capability without expressly asking permission to load and utilize an appropriate framework.  At application run time the user / device owner should have the option of denying the application that capability when desired.

Security issues would improve with systems which:

  • Move network interfaces and protocols out of the OS Kernel.
    • Use a non-kernel process to access network interfaces.
    • Use non-IP protocols for inter- and intra- process communications.  When a process (even a kernel process) needs network services, require it to request permissions and translation services thru a non-root gateway.
  • The entire network communication stack should be moved to a “multi-protocol router and stateful application firewall service” running under a non-root account.
    • One place to enable/disable communication services.
    • One place to monitor communications services.
    • But not an all-or-nothing architecture.  It should be easy to control which protocols are enabled or disabled.  Same with apps.  And same with inter-process/service communication.
  • These aren’t concepts which require a lot of “start-from-scratch” efforts to realize.  The application logic already exists.  We created the DEN and CIM specifications back in the late 90s specifically to provide an industry standard way of managing relationships between people, devices, applications, and services.  
  • In high security environments, this architecture is the required default.  It’s usually achieved thru a combination of OS Hardening and 3rd party security products.  The hardening process removes unnecessary packages from the system, restricts communications capabilities to specific services, and forces communications to pass thru the 3rd party security product for evaluation.
  • Bluetooth devices are for personal area networks.  They don’t need a publicly routable IP Enabled network stack.
    • Nor do my USB, Firewire, Audio and HDMI interfaces!
  • Start applications in a ‘least privilege’ mode and allow the user / device owner to approve activation of features.  If the app doesn’t work, or fail gracefully, in least privilege mode it shouldn’t pass QA.  [And the operating system shouldn’t let it run without a user override.]
  • The Apple iOS Privacy Settings panel demonstrates a good concept, that could be improved.  Important services, features, etc., which have privacy/security concerns should be isolated to specific Libraries and Frameworks.  Operating systems should provide users / device owners a mechanism to enable or disable entire frameworks as they choose.
    • Organizations with high security requirements have been playing whack-a-mole with Mobile Device Vendors over features like cameras, microphones, location tracking, and more.  While some organizations have had small successes getting policy management points built into Mobile Device Manager (MDM) products and Mobile Operating Systems… consumers have been left with little to no idea what their devices are doing or capable of doing.
    • New features should be linked to a framework and privacy control mechanism before the feature’s GA release.

These issues don’t apply just to Smartphones, laptops, and other typical IT products.  These issues are just as important for automobiles, appliances, electronic healthcare products, home automation products, industrial robots, the emerging market of home assistive / personal robotics products, and any other new fangled gadgets coming along with abilities to store, process, or communicate information.

Some time ago, a TED talk described a “moral operating system”.  The speaker was describing the need for a system of morality for people… but I tend to take things literally, and kept returning to the idea of, “how could we improve computer operating systems to facilitate these ideas?”

The obvious first step has been known for years.  Design systems so the default choice is typically the better choice.

Another requirement for this new operating system: it needs to begin with the principle that everything on the disk / storage media belongs to the user or device owner.  It’s my information and I have a right to see it when I want to look at it.  It’s my information and I have a right to monitor which applications or processes have been accessing or modifying it.  And I have a right to restrict which applications or processes can access my information on the disk or storage media.

I’m not daft.  I realize DRM isn’t going away anytime soon.  And I’m not here to argue over which DRM system, if any, is better than the other.  I believe an inherently secure, user-centric operating system can still accommodate a DRM’d service by:

  • giving me the choice to delegate control of a storage location and control of an application sub process to the DRM service.
  • the delegated storage location could be an external media device I choose to dedicate to the service or, more likely, be an encrypted sparse disk image I choose to allow the service to create (at a file location of my choosing).
  • the delegated application “sub process” would likely be some sort of “certificate management” utility which kept the keys to the delegated storage location.
  • so long as I permit the “sub process” to run and don’t tamper with it, it would be able to verify its code signature and verify its certificates to provide sufficient assurance to the DRM’d content provider that I’m following the terms of our agreement.
  • The DRM service should have absolutely no reach or influence within my computer system beyond its application sandbox, its delegated sub process, and its delegated storage location.
  • If I wish to stop or delete the service, it should be as simple as exiting or deleting the application.  The only negative consequence should be losing the ability to read the contents of the encrypted delegated storage area.  Deleting that storage remains my decision, and so does the option of re-installing the App to restore access to the DRM’d media.

In addition to the Virtual Memory Addressing, Memory Address Randomization, and Memory Encryption architectures which have been implemented for computer RAM… I’d also like to see similar architectural changes for how applications are allowed to interact with the file system.

For example, some features might include:

  • Restrict sandboxed applications to a virtual file system using encryption and address randomization instead of allowing the application access to any part of the real file system.
  • Give the user controls to provide an application with access to “the file system framework” so it can interact with things outside its sandbox.  Include some granular choices such as file, directory, or “other app’s data”… with standard file permission options still available also.
  • Just as it may be reasonable to expect an application to ask permission to use a NetworkFramework to communicate outside of its sandbox, it should also be reasonable for an application to need permissions for a FileSystemFramework before interacting with data/media outside of its sandbox.  (Today’s sandbox entitlements, inspected below, are a partial step in this direction.)
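Apple’s App Sandbox entitlements already gesture at this model; you can inspect the file and network permissions an app declares (a sketch; TextEdit is just a convenient sandboxed example):

    # Print the entitlements an app was signed with, e.g.
    # com.apple.security.app-sandbox and com.apple.security.files.user-selected.*
    $ codesign -d --entitlements - /Applications/TextEdit.app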

Again, to summarize in short form, these few key changes could improve the inherent security of many computer / electronic products:

  • Take the network interfaces out of the OS Kernel.
  • Take the network protocols out of the OS Kernel.
  • Direct all network communications thru a non-root multi-protocol router and stateful packet inspection service.
  • Wrap major product features in a system framework and give the user control over whether that framework is accessible on their device, and by which apps/services if they choose to enable.
  • Always start things in least privilege mode until the owner approves more access.
  • Always start from a place which acknowledges the user’s ownership of information and preserve the user’s ownership and rights.
    • Only the owner can choose to delegate control [the operating system and 3rd party applications cannot arbitrarily grant themselves control over the user’s information].
    • Only the owner can choose to provide access to information.
    • And, only the owner can choose to disclose information.

In real, day to day terms… these architectural changes would not require large shifts in the way most developers and engineers go about building their products.  Very few software engineers actually write protocol stacks, kernels, or system frameworks.  For everyone else writing software, the difference between including a framework in your application vs “getting it for free” from the operating system can be as simple as a checkbox or a “# Include”.

The biggest effort, and most important work, is for the kernel and framework developers to adopt architectures which default to inherently safe security configurations and give users control over whether frameworks/features are enabled.

The Linux and Unix communities already have secure OS implementations which achieve some of these goals.  Apple, Oracle, Redhat, and Novell all share some responsibilities for completing the architecture and making it standard in their products.  Microsoft probably has the most baggage to overcome.

Many others in the IT Industry also share responsibilities in making these sort of changes.  Nokia, Siemens, Samsung, Blackberry, Google/Motorola, HP, IBM, and Cisco all need to step up.

Some, such as Symantec, stand to lose some market share if OS Vendors finally step up and fulfill their responsibilities.

Intel, AMD, Motorola, Qualcomm, and TI all have a stake in this as well. Intel is easily in the leadership position right now, since their acquisition of McAfee was explained as being done for the express purpose of introducing more security capabilities directly into the CPU and reducing the need for complex 3rd party products to be loaded by the customer after the system purchase.

Listing the CPU manufacturers brings me to my final point for the security architecture recommendations.  To some extent, this one is mostly on the CPU makers, but coordinating with the OS makers will help.

Enough with the kitchen sink “system on chip” approach.  Yeah, it’s a great idea.  But overdoing it is like combining a Super-Walmart, a Cabelas, a college dorm complex, a hospital, and a super-max prison all into a Seven Eleven.  Who decided the most trusted processes and the least trusted processes should run on the same chip?  The $99 smartphone of today provides as much or more processing capability as $200,000 systems available at the time many of these architectural decisions were made.  Inertia has kept us on course.  It’s time to reconsider old design decisions.

After decades of watching more capabilities be combined into a single chip, I’m no longer convinced it’s the great idea it started out to be.  Keep making things more energy efficient, smaller, lighter, and faster.  But consider backing off the physical co-mingling of chip capabilities.  Consider fencing some of these untrusted communication services off to a component chip and working with the Secure OS makers to build good gateway processes and frameworks for controlling the flow of data.

As for multi-core CPUs… in consumer devices, I still haven’t seen many examples of workloads (applications) which can properly utilize four or more cores… and I’ve seen even fewer examples of consumer workloads which actually need to do so for more than a fraction of a second.  Very few consumers run video rendering processes, and even fewer run multiple virtual machines in a continuous build development process.

On the other hand, I believe a currently underdeveloped chip feature which could provide immediate benefits to consumer and business markets combines secure I/O and secure storage.  In our laptops, smartphones, and other similar devices, our media libraries (video, audio, photos) have grown large, but much of our critical personal information fits within just a few megabytes, up to a couple GB for those who have been paperless longer.

CPU and OS makers should look at implementing physical and logical pathways dedicated to providing the user with a secure data vault.  It could utilize any number of different implementation strategies: some Flash on the motherboard, a CPU pathway available to a specific USB/MicroSD slot or to an optional region/address within an SSD, or something we haven’t even thought of yet.  Whatever it ends up being, just make sure it provides the user with a means of vaulting relatively small amounts of critical data away from their primary storage disk, in a way that keeps the vault physically and logically isolated even from apps granted generic/general file system access.

The best explanation of my interest here may be “Keychain on steroids”… put the data at a location physically separate from the regular storage disk, use a different file system and a different encryption protocol and key, and implement a coordinated CPU and OS architecture which requires all access to the vault be shunted thru specific/dedicated frameworks and gateway services (ie., not directly accessible to regular OS and App processes).
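The closest mainstream analogue today is probably the OS X Keychain, whose command line interface shows the access-thru-a-gateway pattern, if not the physical separation (a sketch; the account and service names are made up):

    # Store and retrieve a secret thru the Keychain gateway.
    # Apps don't open the keychain file directly; requests are brokered
    # by the system's security services daemon.
    $ security add-generic-password -a demo-user -s demo-service -w 's3cret'
    $ security find-generic-password -a demo-user -s demo-service -w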

Personally, I have three applications which I use in a “data vault” fashion and would benefit from this architecture.  I only run them occasionally, when I need them.  They already have extra security controls invoked when launching the apps.  Placing the app data into a secure vault protected by physical and logical separations, including a security framework and control gateway, would both improve and simplify the scenario I currently use.  There is a market for this kind of functionality.  If there wasn’t, products like 1Password and RSA SecurID would not exist.

In product areas outside traditional tech markets, vendors are already running into challenges to the System-On-a-Chip (SOC) trend.  NERC recently began requiring smart-grid device makers producing residential smart meters to separate functionality into at least three physically separate portions within the device.  One trusted portion of the device is required to undergo extensive certification testing (every version of hardware and every version of software, even minor updates).  This portion is allowed to communicate with utility grid control systems.  A second, less trusted, and optional, portion can be implemented for local maintenance access (local/downstream only, no grid/upstream access).  A third, and mostly untrusted, portion is provided for consumer facing services, which carry the high risk of being available to the consumer’s home network and also getting frequent software updates as consumer features are continuously developed.  Overuse of SOC architectures compromised the entire smart grid.  This mandatory chip/feature segregation is critical to the utility industry and provides benefits none of the other proposed architectures can match.

Automotive, aviation, and many other industry segments have similar requirements for architectural physical separation of features.  We need to recognize the value of dis-integration in consumer products as well.

It’s almost ironic that much of the IT industry has been loudly espousing the benefits of loose coupling and dis-integration for a decade or more.  Yet most of the industry overlooked the increasingly tight coupling between Operating Systems and Network Stacks.

Adopting a new operating system always involves a learning curve.  But I look forward to learning a new modern and inherently secure OS that doesn’t have built in IP support.

And if anyone expects to sell me a self driving vehicle or a personal robot (next year or 20 years from now), consider this early notice of my #1 priority.  An inherently secure design.

In fact, for cars and robots… the world would probably be better off if the communications capabilities were moved to physically separate chips, with the I/O pathways between the CPU and CommChips controlled by physical switches or keys.  Turning off the switch or removing a key should permit the device to otherwise operate normally… just prevent it from getting new instructions from the neighbor kid while we’re sleeping or gone fishing.

Installer quit unexpectedly… root cause missing JavaLaunching.framework

Error logs indicate the system needs Java for the package installer to work properly.  As verified and documented below, it does not; it only needs a “framework folder” containing some reference information and will work just fine without a JRE, JVM, JDK, or any other part of Java.

This applies to (ie., verified on) 2013-03-31:  OS X Mountain Lion 10.8.3  
Given the age/history of the Installer utility, this issue probably applies to many other versions of OS X.

Attempts to install anything from a *.pkg results in “Installer quit unexpectedly.” error message.

Below is a snippet taken from multiple error logs resulting from trying to install different packages (two 3rd party apps, two Apple developer downloads, one OS X update, and one Oracle Java update).  All six pkg installations failed with error logs containing these same lines.

The problem… a few weeks ago I scrubbed the system of all traces of Java that I could find.  In light of all the recent Java exploit news, I wanted to test how complicated it would be to completely purge Java from the system; and I wanted to see if doing so would bring any unexpected consequences.  Well, removing all traces is complicated (and difficult to verify with certainty).  Although everything on the system continued to work fine, at first, there are unexpected dependencies… such as Apple referencing a Java Framework within their package installer.

Given the amount of “stuff” Apple OS X inherited from Sun Solaris over the years, this shouldn’t come as too big a surprise.  Sun used to regularly hard code Java Library dependencies into products which didn’t actually need them.  While the text of the error messages is different (but not by much), this is the exact behavior encountered installing certain Sun applications from the command line on headless servers… i.e., the server had absolutely no reason to load a graphical environment, but the software installer was hard coded to look for the Java GUI packages.  For environments where security policies forbid unnecessary packages, we’d isolate the server, load the extra packages, complete the installation, and then remove the unnecessary packages… after the initial install/config was worked out, the organization would use DR processes to build new instances (eliminating the need to keep fiddling with those GUI packages).

By the way, the command line “installer -vers” outputs “… v. 1.5.0 Copyright (c) 1999-2006 …”  That does match up with the time frame when code was being merged into OS X 10.4 from Solaris 10.

The fix… well, four of the above package installation attempts were attempts at fixing.  This is why hard coding dependencies is a bad idea; attempting to install a package that would solve the problem requires the referenced package, and fails.  After some research (and documentation of activity thus far), I tried another command line:

         sudo installer -pkg /Volumes/OS\ X\ 10.8.3\ Update\ Combo/OSXUpdCombo10.8.3.pkg -target /

oops… got ahead of myself there.  That one may have been like swatting flies with a cannon, and I’d intended to try various command line options on the Java packages first.  It’s done and requesting a system restart now.  Didn’t matter; it didn’t solve the problem.  The framework was not restored and normal package installations are still failing.

Next I tried some other command line combinations and also tried extracting the packages to see if the frameworks could be manually located.  No joy with either approach.

Running out of options… but before trying any of the operating system recovery / re-installation choices, I’ll try restoring the framework from Time Machine.

Looking thru my backups, I found /System/Library/PrivateFrameworks/JavaLaunching.framework (and /JavaApplicationLauncher.framework) from a few weeks ago and restored the folder to its original location.

Result… restoring just 375KB of framework folders fixed the problem and the OS X package installer is working again.  It was not necessary to restore/install a JRE or JVM or any other part of Java.  It just needed the required folder containing references to a Java framework.  Installer doesn’t need Java and it doesn’t use Java, but it was hard coded to require the presence of a symbolic reference.
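For anyone hitting something similar, the hard coded reference is easy to confirm from Terminal (a sketch, using the paths from the error log below):

    # Show which Java frameworks the Installer binary links against.
    $ otool -L /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer | grep -i java

    # Verify the framework folder is back in place after the restore.
    $ ls /System/Library/PrivateFrameworks/JavaLaunching.framework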

 

Moral of the story…

  • Java – annoying.
  • hard coding artificial dependencies – shouldn’t be allowed to escape unit testing let alone make it into GA production release software.
  • backups which work and provide options for partial restores – everyone should have them.
  • Even simple steps for hardening a consumer operating system quickly become complicated, but can usually be resolved without too much fuss.
  • Code templates with a lot of “# Includes” may be convenient for developers but often
    1. present a headache for users – by creating required dependencies which should have been optional.
    2. introduce vulnerabilities – by requiring components which aren’t necessary in the target deployment environment.
    3. present long term maintenance problems – by making an entire application dependent on something which should have been an optional feature.
The problem report includes these lines:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Application Specific Information:
dyld: launch, loading dependent libraries
Dyld Error Message:
  Library not loaded: /System/Library/PrivateFrameworks/JavaLaunching.framework/Versions/A/JavaLaunching
  Referenced from: /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer
  Reason: image not found
Binary Images:
       #x######### – #x######### com.apple.installer (6.0 – 614) <3E180768-4C29-3B0D-A47D-F4A23760F824> /System/Library/CoreServices/Installer.app/Contents/MacOS/Installer
       #x######### – #x######### com.apple.GraphKit (1.0.5 – 30) <5ECA4744-FFA8-3CF0-BC20-3B2AD16AD93C> /System/Library/PrivateFrameworks/GraphKit.framework/Versions/A/GraphKit
       #x######### – #x######### com.apple.securityinterface (6.0 – 55024.4) <FCF87CA0-CDC1-3F7C-AADA-2AC3FE4E97BD> /System/Library/Frameworks/SecurityInterface.framework/Versions/A/SecurityInterface
    #x######### – #x######### dyld (210.2.3) <A40597AA-5529-3337-8C09-D8A014EB1578> /usr/lib/dyld
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Fix repeating iTunes prompt to accept incoming network connections.

“Do you want the application ‘iTunes.app’ to accept incoming network connections?”

If you get this prompt every time you start iTunes, it’s probably an issue with the application contents vs the application’s code signature.  Users seem to be encountering this sort of problem more frequently since Apple’s introduction of additional code signing, sandboxing, and GateKeeper functions in Mountain Lion.

Normally, this terminal command:

$ codesign -vvv /Applications/iTunes.app

should result in this:

/Applications/iTunes.app: valid on disk
/Applications/iTunes.app: satisfies its Designated Requirement

If not, then the application package contents are probably mucked up.  I recently encountered this situation as the command results showed a lot of extra files in the package.  Probably leftovers from an update.

Most of the recommended fixes involve deleting iTunes and reinstalling from a fresh download.  However, Mountain Lion won’t let you delete iTunes… says it “can’t be modified or deleted because it’s required by Mac OS X.”

Some folks have had success by simply running the installer anyway.  But in my case, the extra files weren’t removed.  Instead I found I could right click on the /Applications/iTunes.app package and “Show Package Contents“.  Once inside the package, I could delete the contents.  I simply deleted the entire “Contents” directory, and then installed iTunes using a new download from the website.  The terminal command “codesign” then generated the correct results and the Firewall prompts stopped repeating.
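For the Terminal-inclined, the same repair can be sketched as follows (an assumption on my part; I did it thru Finder, and deleting pieces of a system app is at your own risk):

    # Confirm the signature is actually broken before touching anything.
    $ codesign -vvv /Applications/iTunes.app

    # Remove the package contents (equivalent to deleting the "Contents"
    # folder via Show Package Contents), then reinstall from a fresh download.
    $ sudo rm -rf /Applications/iTunes.app/Contents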

 

FYI. This occurred after a clean re-install of OS X 10.8.2 Mountain Lion on a 2012 MacBook Pro.  I did a re-install to clean up problems from a number of app testing sessions and restore things to clean settings.  After running available software updates from the App Store the iTunes / Firewall prompts began occurring each time I started iTunes.  Most likely there was some issue with the App Store update for iTunes not cleaning up old files correctly.

OSX: Unable to talk to lsboxd ( and other console errors)

mdworker: Unable to talk to lsboxd

sandboxd: mdworker deny mach-lookup com.apple.ls.boxd

 

Cured by doing a safe boot and then restarting normally?  Really?  <cue twilight zone music>

 

Somewhere along the way, Mountain Lion started getting oodles of these errors in the console log… usually every 4 minutes or so.  It’s been common enough that there are posts about it on quite a few sites.  Surprisingly, the fix really is as simple as a safe boot.  Apparently safe boot cleans out some cache files which were causing a problem.

 

Reduce size of guest vmdk disks with VMware Fusion 4.1.3 on OS X 10.8

When running multiple VMs, and keeping backup copies of various configs, a considerable amount of disk space can be used quickly.  The following steps have been confirmed to reduce disk usage for several virtual machine guest operating systems.

  • Oracle Linux R6 U3 64-bit
  • CentOS 6.3 64-bit
  • Ubuntu 12.04 LTS 64-bit
  • openSUSE 12.1 64-bit
  • Mac OS X (multiple versions)

Oracle Linux R6 U3 and CentOS 6.3:  *these steps utilize a desktop environment and VMtools.

  1. remove any unneeded apps/packages, files, etc and empty the trash.
  2. clean up the YUM package files with (terminal commands as root):
    1. yum clean packages
    2. yum clean metadata
    3. yum clean dbcache
    4. (or) yum clean all
  3. at the command line, type “vmware-toolbox” to launch the VMware Tools GUI within the guest VM.  This is equivalent to the GUI available within Windows guests.
  4. Select the drive (partition) to Shrink.  First the utility will prepare the drive for the shrink process, and then a final dialog box will be presented to begin the shrink drive operation.  (The in-guest commands are consolidated in the sketch below.)
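Consolidated, the in-guest portion looks roughly like this (a sketch; run as root, and the GUI still finishes the shrink):

    # inside the guest, as root:
    yum clean all        # equivalent to cleaning packages, metadata, and dbcache
    vmware-toolbox &     # launches the VMware Tools GUI; use its Shrink tab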

Ubuntu 12.04 LTS:  same as described for CentOS and Oracle Linux, except the YUM commands are replaced with:

    1. sudo apt-get autoclean
    2. (or) sudo apt-get clean
    3. (or) sudo apt-get autoremove

*note: Ubuntu 12.04 utilizes the Ubuntu Software Center for GUI application management (and has the annoying characteristic of only working with one selection at a time); installing the “Synaptic” package manager provides a more traditional Linux package manager.

openSUSE 12.1:  same as described for centOS and Oracle Linux, except YAST handles the package and cache cleanups (instead of yum or apt-get).  Options are available within the YAST GUI.

OS X:  10.6 Snow Leopard, 10.7 Lion, and 10.8 Mountain Lion (including servers).

  1. remove any unneeded apps, files, etc and empty the trash.
  2. using Finder, navigate to the following folders and remove unneeded fonts and dictionary files for languages you’re certain you won’t need for this VM.  Sort the folder contents by size and select the largest.  You can verify font files by opening them in the “font book” app to preview.
    1. /System/Library/Fonts/
    2. /Library/Dictionaries/
    3. /Library/Fonts/
    4. note: sometimes the system will state a font is in use and need a restart before allowing all of the deleted fonts to be emptied from the trash.
  3. use the utility Monolingual to remove unneeded Architectures, Input Types, and Languages from OS X and installed application packages.
    1. If you know you have an app which needs to be excluded, use the Monolingual “Preference” to add the app’s location to a list of excluded directories.
    2. in the main app, use the “Languages” tab to select which languages to remove (be sure to scroll the entire list and de-select any you wish to keep).  On a fresh install of OS X 10.8 Mountain Lion, selecting all but English, French, and Spanish removed about 1.6GB
    3. use the “Input Menu” tab to select what to remove.
    4. use the “Architectures” tab to select what to remove.
    5. note: Monolingual only removes the items in the visible tab; if you desire to remove items from all three tabs, you’ll need to run it three times.
  4. use the disk utility app (within the VM) to erase free space on the disk.
  5. close the VM and exit VMware Fusion
  6. use the vmware-vdiskmanager utility to shrink the VMDK.
    1. open Finder, browse to the stored VM, right click and show package contents, locate the file “your-vm-name-here.vmdk”.
    2. open Terminal and cd to “/Applications/VMware\ Fusion.app/Contents/Library/”
    3. type “./vmware-vdiskmanager -k “
    4. drag the VMDK file from Finder to Terminal (this will append the file path and name to the command).
    5. in Terminal, press Enter to run the assembled command and shrink the vmdk (see the consolidated sketch below).
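Put together, the host-side shrink looks roughly like this (a sketch; the VM name and path are hypothetical):

    # Shrink the VMDK after erasing free space inside the guest.
    $ cd /Applications/VMware\ Fusion.app/Contents/Library/
    $ ./vmware-vdiskmanager -k ~/Documents/Virtual\ Machines.localized/MyVM.vmwarevm/MyVM.vmdk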

Install CentOS 6.3 64-bit Linux in VMware Fusion 4.1.3

As part of my iOS app development and testing lab, I have a need to be able to test client applications against multi-platform database services.  Last year I determined a collection of OS X, CentOS, and openSUSE virtual machines running MySQL and PostgreSQL provides an adequately diverse test environment for my needs.

A wide variety of application prototyping and testing needs can be served by these combinations without requiring a rack of high end hardware and a couple full time DBAs to maintain everything.  Many of my clients have performance testing and production requirements far beyond my little “proof of concept” setup.  However, my “proof of concept” environment often helps me better understand how to communicate with the DBAs in the large organizations.  And sometimes it allows testing ideas that they don’t have the luxury of trying out on a $25 Million production database cluster.

My virtual lab environment had grown a bit stale over the past year.  Over the past week or so, I’ve been updating to OS X 10.8 Mountain Lion and both the Xcode 4.4 general release and the Xcode 4.5 iOS 6 betas.  Now I’m beginning to update the Linux and SQL components of the environment.  I’ve had a long affinity for Suse Linux so I like to keep a familiar distro on hand.  Many clients are using Redhat in their production environments, so CentOS has become a necessity.  In the past, Solaris was always a key component of my setups but not so much any more; adding some new Solaris VMs will be deferred for another time.

For this portion of the lab update I’ll be building a couple new CentOS VMs and keeping some notes.  I’ll begin with the CentOS 6.3 x86_64 “netinstall.iso”.  Assuming you’re installing to a location with internet connectivity (and not organizationally firewalled into using sneakernet for your lab), the netinstall.iso option saves the time otherwise spent updating all of the packages in the LiveCD or full ISO images.

In VMware Fusion 4.1.3,

  • select the menu options “File” and “New” to get the “Create New Virtual Machine” dialog window.
  • select “Continue without Disk
  • select “Choose a disk or disk image…
  • use the presented Finder popup to navigate to your target ISO image (which you’ve previously downloaded) and select “Continue“.
  • select Operating System: “Linux
  • select Version: “CentOS 64-bit
  • select “Continue”.  Note: the OS and Version selections are important as they inform VMware Fusion which drivers, VMtools, and VM configuration settings to utilize.  VMs can successfully be created using less specific settings, but you’d lose out on some features of Fusion and likely have to perform additional manual configuration work within your Linux VM.
  • you should be presented with a summary configuration of your new VM with the options to “Customize Settings” or “Finish“.  The default will likely be one processor core and 1GB memory; I recommend increasing this to two cores and 2GB memory.  After completing the installation and configuration, you might try lowering the settings, but these will be helpful for getting thru the various package installations and configurations.
  • select “Finish” and use the Finder popup to name and save your new VM.  I like to configure a “base image” to my preferences and then make copies as needed for testing new configurations or loading additional packages.  So it’s helpful to think of a naming convention if you are likely to have multiple copies over time.
  • Fusion will start the new VM and the netinstall.iso will boot to a setup process.  Netinstall will be a text based interface (use your keyboard arrow keys to move between options).  The first dialog will be for testing the installation media.  I’ll “Skip” the media test.  If you’re uncertain about where your image came from or the quality of your internet connection, you may want to let the media test proceed.
  • choose a language
  • choose a keyboard type
  • choose an installation method.  select “URL“. (you’ll be prompted for details later).
  • configure TCP/IP.  unless you need to change, accept the defaults by selecting “OK“.
  • a dialog will display “waiting for network manager to configure eth0
  • URL setup.  enter “http://mirror.centos.org/centos/6.3/os/x86_64/”.  The text interface does not allow copy/paste from the host, so you will need to type this in exactly.  centos.org redirects the download to one of many mirror sites.  If the URL doesn’t work for you, check your typing and try again.  It’s possible the redirection could get sent to a server that is temporarily busy or offline.  Trying again usually works.  If not, you’ll need to do some searching to locate a direct URL to a mirror server that is reachable from your network location.
  • After the netinstall process begins, in a few moments you’ll see a graphical screen displaying a CentOS 6 logo.  Select “Next“.
  • Basic storage device should be ok. Select “Next“.
  • Storage Device Warning.  This is a fresh install, so select “Yes, discard any data“.
  • local hostname:  Enter a hostname for your VM.
  • select a timezone.
  • enter a root password (twice to confirm, must be at least six characters).
  • which type of installation would you like?  select “use all space“.
  • write changes to disk
  • select optional software to install.  Note:  Selecting software packages is a lot easier if you wait until the system is up and running with VMtools providing proper mouse and video drivers plus the ability to select the various package repositories you’ll want to use.  So, for this step,  select “Minimal Desktop” and “Next“.  If you choose the “Minimal” option, you’ll be limited to the command line.
  • The necessary packages will be downloaded and installed (about 30 minutes on this older Core 2 Duo MacMini).  When it’s complete, you’ll be prompted to “Reboot“.
  • After the reboot, a Welcome screen will continue the process of setting up the new system.  Select “Forward“.
  • Agree to the license and select “Forward“.
  • Create User: input your desired user information. Select “Forward“.
  • Set Date and Time. Select “Forward“.
  • At this point I get a warning message “Insufficient memory to auto-enable dump. …” That’s ok, I don’t need it for this usage, so I’ll select “Ok” and “Finish“.  The VM will reboot to complete the setup.
  • After the reboot, a GUI login screen will prompt you to log in with the account just created in the previous steps and deliver you to the new desktop.

At this point the new VM is ready to use with a base configuration of the “Minimal Desktop” distribution of CentOS v6.3.  However, there are some additional steps to make it a bit more user friendly prior to archiving a copy and proceeding with the desired dev / test work this VM is intended for.

  • Use the VMware Fusion menu to select “Virtual Machine | Install VMware Tools“.  If you’ve not previously used this feature in your current version / installation of VMware Fusion, you’ll be prompted that “VMware Fusion needs to download the following component: VMware Tools for Linux“.  Select “Download“.
  • VMware Fusion will be adding an additional component to the Fusion application on your Mac OS X host, so you will be prompted to authenticate and permit this action.
  • Next you’ll be prompted by Fusion to “Click Install to connect the VMware Tools installer CD to this virtual machine“.
  • This should result in the CentOS VM’s desktop displaying a DVD (or CD) icon titled “VMware Tools”.  Unfortunately, mine displayed a blank folder with an empty disk as a result.
    • Checking “/Applications/VMware Fusion.app/Contents/Library/isoimages” confirmed that a “linux.iso” file was present (dated 2012-05-27).
    • Rebooting the VM and re-trying the VMtools installation still resulted in an empty disc image / folder.  This is a common problem between Fusion and many Linux distributions.  VMware’s support forums offer several work arounds, most of them at the command line.
  • My solution is to use the OS X Finder to browse the “VMware Fusion.app” package contents, copy the “linux.iso” to another folder, and mount it to the VM’s CD drive.
  • Return to the CentOS desktop, use “Computer” to browse the CD.  You should now see a “VMware-Tools……tar.gz” file.
  • Drag the “….tar.gz” file to your home folder.  Don’t bother trying to right click and select “Open with archive mounter”; extracting the files through the GUI will probably result in a process that estimates a couple hours to complete.
  • Use the CentOS “Applications” menu to launch “Terminal“.
  • “cd” to your home folder.
  • Use the “ls” command to verify the “…tar.gz” file is there.
  • Expand the archive using “tar zxpf VMwareTools-….tar.gz”  HINT: type “tar zxpf VMw” and hit “Tab” to autocomplete the command.
    • This should result in a new folder named “vmware-tools-distrib” containing 3,275 items for 178.6MB.
  • In terminal, type “cd vm” and hit “Tab” (to autocomplete).
  • Another “ls” command should verify the presence of “vmware-install.pl”.
  • You’ll need super user (root) privileges to run this script.  Type “su” and then enter the root password established during installation.
  • Enter “./vmware-install.pl” (or just type “./v” followed with a tab key to autocomplete).
  • The script will prompt with about nine questions.  Use “Enter” to accept the defaults for each.
  • When the script completes you can delete the “…tar.gz” from the VM to save diskspace.  In all likelihood, if you ever need them again for this specific VM, they’ll be out of date by then.  Reboot the VM to activate the VMware Tools features.  (The whole sequence is consolidated below.)
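The in-guest sequence condenses to a few commands (a sketch; the exact VMwareTools version in the filename will differ):

    # Extract the VMware Tools archive and run the installer as root,
    # accepting the defaults at each prompt.
    $ cd ~
    $ tar zxpf VMwareTools-*.tar.gz
    $ cd vmware-tools-distrib
    $ su
    # ./vmware-install.pl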

Now that VMware Tools is active, the mouse should work much better, and you’ll be able to resize the VM window to whatever best fits your host machine’s OS X desktop.  Copy/paste from the host machine should also be enabled.

VMware Fusion shared folders should also be working now.  However, you should verify this, as it is another feature where Fusion yields different results across various Linux distributions.  On this particular CentOS VM, sharing some folders from the host machine resulted in them being available within CentOS at the path “/mnt/hgfs/”.  Fortunately it wasn’t necessary to perform any additional commands to use them.  A quick test confirmed the shared path was readable and writeable from the VM.  Note: this feature mounts the shared folders into the guest VM as a virtual file system; there isn’t any shared/virtual networking going on with this feature.
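
A sketch of that quick test from the VM’s Terminal (the folder name “Shared” below is hypothetical; substitute whatever folder you shared from the host):

$ ls /mnt/hgfs/                              # each shared folder appears here by name
$ echo "hello" > /mnt/hgfs/Shared/test.txt   # verify the share is writeable
$ cat /mnt/hgfs/Shared/test.txt              # verify it is readable
$ rm /mnt/hgfs/Shared/test.txt               # clean up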

The next step I recommend is selecting the Applications menu “System | Software Update“.  Despite having just completed a network installation, this new instance of the CentOS Minimal Desktop config had 43 available updates (124.6MB).  The update process will prompt for the root password.  You will also likely be prompted to authenticate to accept certificates, signatures, and various packages during the update process (so it’s not a walk-away-and-leave-it process).
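
If you prefer the command line over the Software Update GUI, the standard CentOS alternative is to run the update from a root shell (this is ordinary yum behavior, though the GUI path above is what I used here):

$ su                   # enter the root password when prompted
# yum update           # lists the available updates and asks once to confirm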

Now that the base config is installed and updated, I’ll shut down the VM and make a Zip Archive of its VM image (using OS X Finder).
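
The Finder’s “Compress” context-menu item does the job; the Terminal equivalent is sketched below, with a hypothetical VM bundle name in Fusion’s default VM folder.  The -y flag stores symlinks as symlinks, which matters inside a VM bundle:

mac$ cd ~/Documents/Virtual\ Machines.localized
mac$ zip -ry CentOS-6.3-base.zip "CentOS 6.3.vmwarevm"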

It took about 2.5 hours to get this far.  A quad-core host machine with an SSD and a faster internet connection would reduce that considerably.  Some of the time was also spent writing these notes.

With this new configuration built and a backup tucked away, I probably won’t need to perform a base install of CentOS in this environment for another year.  I didn’t keep as much detail last time, so I’ll have to wait another year to compare whether things get faster.

My next steps for CentOS will be to configure the various application packages and settings that I need (and make another Zip Archive backup).  From there it is much faster to deploy additional instances for dev/test work whenever needed.

Install OS X 10.8 Mountain Lion in VMware Fusion 4.1.3 (clean installation, not upgrade)

Previously I’ve been using older OS X virtual images and upgrading them as needed.  Looking back through some notes about various utilities, features, and command line tools, I noticed something which I wanted to test on a clean installation.

Sometimes new versions of OS X remove some features, but upgrading over an old installation (instead of doing a clean install) may leave the old feature or utility in place and available.  In order to verify whether Mountain Lion removed some utilities previously available in Snow Leopard and Lion, it seemed like a good time to test the process of creating a clean VM installation from the App Store’s downloaded installation file for OS X 10.8 Mountain Lion.

Within VMware Fusion 4.1.3 (the latest release version at this time):

  • make sure you have an available copy of the App Store download for Mountain Lion
  • create a new VM
  • select “Continue without disc”
  • on the next screen, “Choose a disc or disk image…” and navigate to the “Install OS X Mountain Lion.app” file.  (When initially downloading this file, the App Store places it in your Applications folder.  On most systems it gets automatically removed after the upgrade/installation is completed, so prior to running it, it is a good idea to back up a copy to another location; a Terminal sketch for this follows the list.  The copy can be used to upgrade other machines, or for creating VMs as we’re about to do now.)
  • select “Continue” on the next few screens to accept the defaults and start the VM.  The defaults should be adequate for most initial testing and I’d recommend them until you get more familiar or identify a specific need to customize settings further.
    • note: if you have a folder of application install files, it is helpful to configure Fusion’s “file sharing” options to present that folder within the VM when it is running.
  • At the end of the settings screens, select the “Finish” button.  Fusion will complete the configuration and start the VM.
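
For the installer backup mentioned in the list above, a plain Finder copy works, or from Terminal (the destination folder here is hypothetical):

mac$ mkdir -p ~/Documents/Installers
mac$ cp -R "/Applications/Install OS X Mountain Lion.app" ~/Documents/Installers/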

The VM may seem to start very slowly before presenting the initial grey OS X booting screen, and it will likely take some time, perhaps several minutes, unless you’re on an Ivy Bridge / SSD system (if you are on a 2012 Ivy Bridge CPU and encounter a CPU error from Fusion, this post explains how to modify your VM config to continue).

Eventually you should see the “OS X Utilities” window and have the option to select “Reinstall OS X“.  Select “Continue” a couple times, then “Agree” to the license (twice), select the hard drive “Macintosh HD” and “Install“.

The installation proceeds without any further interaction until it automatically reboots.  After the VM reboots, the “Install OS X” screen will appear and display a progress indicator.  On an older Core Duo Mac mini, it took about ten minutes to reach the reboot, and then the installer displayed an estimate of about 20 minutes to complete the installation.  Activity Monitor shows this old Mac mini is CPU constrained, but I’m waiting for the next refresh to include Ivy Bridge and USB 3 before getting a new one.

When the installation process completes, the VM will reboot and present the “Welcome” screen to begin the initial configuration of OS X.  If audio is enabled, you should hear a voice welcoming you to setup.

  • select a Country
  • select a Keyboard Layout
  • choose whether to Transfer Information using Migration Assistant.  If you haven’t used it before, it can move information from other Macs, Windows PCs, or Time Machine backups and works pretty well for a wide variety of application settings and user data.   Since the goal is testing a clean installation of OS X 10.8 Mountain Lion, I won’t be using Migration Assistant this time.  It can also be run later, so it’s not critical to decide right now.
  • choose whether to enable “Location Services” (I’m leaving it disabled on this VM).
  • Enter your Apple ID to set up the App Store, iCloud, etc.  I have separate IDs for iTunes and iCloud, and don’t need either configured on this VM right now.  So I’m selecting “Skip” on this setup screen.
  • accept the “Terms and Conditions” (twice again).
  • fill in the fields for “Create Your Computer Account” to establish your username and password.
  • “Select Your Time Zone” by clicking the map and selecting a nearby city.
  • “Register“.  I’ll “Skip” this screen for this VM.
  • and finally you reach the “Thank You” screen and can “Start using your Mac“.

At this point you get the new OS X Mountain Lion desktop (fortunately it does not inflict another reboot on you here) and you are ready to go.

Since my goal was to configure a cleanly installed VM for some testing, I’m going to stop here, shut down the VM, and make a Zip Archive of the VM’s file image for later re-use.  Once that housekeeping task is complete, then I’ll review my notes on previous versions and retest various utilities to see what still works.  Results will be documented in a follow up post.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Summary:  Using the “Install OS X Mountain Lion.app” file to create a new virtual machine in VMware Fusion 4.1.3 is quick and easy.

Xcode 4.2 – using source code branches and other Xcode shortcuts

Command-T opens a new tab in Xcode.  Tabs can be pulled outside the main window to create a multi-window environment.  These additional windows can then be pulled over to a second monitor.  The Xcode | Preferences | Behaviors pane can be used to set up window layouts that automatically launch when performing certain activities (like debugging, building, or testing).

Run multiple instances of your project simultaneously:  Xcode 4.2 supports launching multiple instances of an app (or multiple apps).  This helps with testing client-server scenarios and multi-user peer-to-peer apps.  You can mix and match simulator versions and physical device types, or just run multiple instances of the same version.

running Xcode from the command line:
$ /Xcode/usr/bin/clang -c myfile.c
$ /Xcode/usr/bin/clang++ -c another_file.cpp
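
Continuing that example, the resulting object files link into an executable the same way (the file names are illustrative, and the /Xcode prefix assumes the same non-default install location as above; clang++ is used for the link step because one of the objects came from C++ source):

$ /Xcode/usr/bin/clang -c main.c
$ /Xcode/usr/bin/clang++ main.o myfile.o another_file.o -o myprogram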

Source control and multiple branches: From Organizer | Repositories you can navigate into the Branches folder under a project, and then select the + Add Branch icon to create a new branch (additional options appear in popup dialog boxes).  From within your Xcode Workspace and the Editor, activate the “Show the Version Editor” icon (it’s in the upper right area of the toolbar).  Now you’ll have a navigator bar displayed at the bottom of each editor pane, and you can use this navigator bar to view items in various branches.

Note: navigating in the Version Editor does not switch your currently active branch.  Be sure to use the Organizer’s Repository view to switch branches.  When clicking the Version Editor icon, it will initially display the currently active branch and version in the lower navigation bar (in case you need to quickly verify which branch you are working in).

To switch your active editing to a different branch, navigate back to Organizer | Repositories and select the blue folder under your project repository (left side of the screen).  From here, the lower area of the Repository view will display icons to Pull, Commit, or Switch Branch.  The upper right hand portion of the Repository view will display “Current Branch: yourBranchNameHere“.  If you switch branches while a workspace has that project open, the workspace window will refresh and load the selected branch’s files into the navigator.

Reconciling code from multiple branches:  When using the Version Editor, the navigation bar in the lower area of the screen “appears” to allow you to look at two branches simultaneously.  At least for me (and my installation of Xcode 4.2.1), that doesn’t actually work.  The version editor is limited to viewing within the currently active branch.   Instead, when you need to compare across branches (or are ready to merge), it seems necessary to use the Xcode | File | Source Control | Merge menu option.  This will launch another popup-window version of the version editor; however, it will only show the most recent version in each branch (the other version navigation features are disabled in the Merge view).
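
Since Xcode 4’s repository features sit on top of the underlying version control tool, the comparisons the GUI makes awkward are straightforward from Terminal, assuming the project uses a git repository (rather than Subversion).  The path, branch names, and file name below are hypothetical:

$ cd ~/Projects/MyApp                          # the folder containing the project’s git repository
$ git branch                                   # list branches; * marks the active one
$ git diff master featureBranch -- MyClass.m   # compare one file across two branches
$ git checkout featureBranch                   # switch the active branch
$ git merge master                             # merge master into featureBranch

Since switching branches changes the files on disk underneath any open workspace, it’s safest to close the Xcode workspace first when doing this from the command line.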

I’ll update these notes if I find a better way to compare versions across multiple branches or if this is updated/fixed in a subsequent release of Xcode.

Adding Perl modules to OS X Lion with Terminal and CPAN.

If you’ve installed the Apple developer tools for OS X Lion, then you should already have a working installation of Perl.  But you might find a need for some additional modules.

  • Visit CPAN and search for the name of the required module, such as “Net::Telnet::Cisco”.  If you have an existing Perl script, it probably identifies the required modules near the top of the script.
  • Review that module’s information on CPAN to identify any prerequisite modules which need to be installed first, such as “Net::Telnet”.
  • open a Terminal session
  • to confirm Perl is installed, use:  mac$ perl -v
  • mac$ sudo su 
  • Password: AdminPassword
  • mac# cpan
  • cpan[n]> install ModuleName
  • cpan[n]> q       # ends the CPAN session
  • mac# exit        # exits sudo and returns Terminal session normal user permissions
Additional notes:
  • If you don’t “sudo su” first, the final steps of the module install process will fail.
  • If this is the first time you’ve used “cpan”, you’ll be prompted with configuration options.  “automatic” should work just fine.  It will locate the URLs for the CPAN mirror sites, complete the auto configuration, and provide the cpan prompt.
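
To confirm a module actually installed, a quick one-liner prints its version (using the Net::Telnet example from above); if the module is missing, Perl exits with a “Can’t locate …” error instead:

mac$ perl -MNet::Telnet -e 'print $Net::Telnet::VERSION, "\n"'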

PS: after evaluating Xcode vs Komodo Edit v7, I found Xcode to be adequate for editing Perl.  Normally I would not make a case for using an IDE to edit a Perl script, but I’ve recently been asked to refactor a collection of large systems management scripts.  In this situation, using an IDE to organize the files and provide source control is imperative.

  • syntax highlighting:  Xcode was able to perform syntax highlighting equivalent to Komodo Edit.
  • code completion: Xcode offers completion of some basic items, plus completion for anything you’ve already typed elsewhere in the project.  Komodo Edit has full completion for some methods, but on others it only offered a hovering text box suggesting some syntax you might use.
  • source control:  Komodo Edit has a menu option to “back up” a file to another location.  Xcode treats the Perl files like any other project file, with full control of updates, comments, tracking, etc.
  • The full (paid) version of Komodo is probably a nice tool if you spend most of your time with the languages it focuses on.  The free version may be useful if you’re learning one of the supported languages and don’t have other tools available; however, it doesn’t offer enough capabilities to justify it over Xcode or Eclipse if you already use one of those.