Search Engine Optimization (SEO) is like whack-a-mole.

For the tl;dr crowd…  Google’s algorithms are constantly changing; and no matter the topic, work in at least one mention of cats.  LOL.

 

https://www.startupgrind.com/blog/how-i-modified-my-seo-game-to-keep-up-with-google-in-2017-21/?utm_content=buffer6e14c&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer


Situations where JIRA doesn’t meet the needs of a project (JRA-846).

When evaluating tools like JIRA, HP ALM, or IBM Rational, it’s important to evaluate project needs versus product capabilities.  Obviously the costs of getting started with JIRA are much lower than some alternatives.  But sometimes, being penny-wise can result in being pound-foolish.

For a simple “MVC”-type application with a limited set of components, JIRA’s features will likely be adequate, or project needs can be met with some minor customizations and/or plugins.

However, when managing ongoing development of systems which contain many levels of hierarchical components, JIRA’s limitations may present significant obstacles.  For many years, there have been open feature requests regarding support for hierarchies.  As of March 4, 2014, JIRA’s response is that it will be another 12 months before they “fit this into their roadmap”.

JIRA JRA-846: Support for subcomponents

For large distributed systems, with complex dependencies, this presents a significant challenge.

While setting up a new JIRA/Atlassian environment for a solution comprising 8 major applications, I’ve found that it is not possible to create a hierarchy of subcomponents, nor is it possible to establish versioning for those subcomponents.  Instead, the JIRA data model and workflows are designed for all components of a project to exist as a flat list, and for all components to be on the same version/release cycle.

For our solution, many of the major applications start with a commercial product, incorporate multiple modules, integrate an SDK, integrate third-party plugins, and finish with custom coding of multiple subcomponents.  The design pattern is to establish interface boundaries, decouple the components, and enable components to be updated independently (some people call this SOA).

Now I’m getting a clearer picture of when it is time to consider alternatives such as HP ALM or IBM Rational.  In the past, I’ve encountered several very successful JIRA implementations.  And I’ve encountered a number of failures.

Comparing my current experience of setting up a new “systems development” project in JIRA with those past experiences, I now understand the tipping point was a matter of component complexity.  JIRA’s architecture needs to be changed such that components can be containers for other objects and can be versioned independently.  While there are elegant/simple ways to introduce a data model which supports this, it would likely require them to refactor most (if not all) of their application stack.  Given their success with smaller projects, it’s easy to understand their business decision to defer these feature requests.
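
As a rough illustration of what such a data model could look like (a hypothetical sketch of the concept, not Atlassian’s actual schema), components become containers that can hold subcomponents, each carrying its own version:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Component:
        """Hypothetical component that can contain subcomponents,
        each with its own independent version/release stream."""
        name: str
        version: str                      # versioned independently of parent and siblings
        parent: Optional["Component"] = None
        children: List["Component"] = field(default_factory=list)

        def add_subcomponent(self, child: "Component") -> "Component":
            child.parent = self
            self.children.append(child)
            return child

        def path(self) -> str:
            # Dotted path from the root, e.g. "Billing.PaymentSDK.FraudPlugin"
            return self.name if self.parent is None else f"{self.parent.path()}.{self.name}"

    # One major application with independently versioned subcomponents
    billing = Component("Billing", "4.2.0")
    sdk = billing.add_subcomponent(Component("PaymentSDK", "1.9.3"))
    sdk.add_subcomponent(Component("FraudPlugin", "0.4.1"))
    print(sdk.children[0].path())   # Billing.PaymentSDK.FraudPlugin

The point of the sketch is simply that each node carries its own version and release cycle, which is exactly what a flat, single-version component list cannot express.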

JIRA continues to recommend workarounds, and several third-party plugins attempt to address the gap.  Unfortunately, each of these workarounds is dependent upon the product’s internal data model and workflows.  JIRA itself has discontinued development of features which support one of their suggested workarounds.  And some third-party plugins have stopped development, most likely due to difficulties staying in sync with internal JIRA dependencies.

It can take six months to two years to get an HP ALM or IBM Rational solution running smoothly, and there are ongoing costs of operational support and training new developers.  However, there are use cases which justify those higher costs of doing business.

It’s unfortunate my current project will have to make do with creative workarounds.  But it has provided me an opportunity to better understand how these tools compare, and where the boundaries are for considering one versus the other.

Midwest gets surprise snow forecast in February

In today’s news… Midwest gets surprise snow forecast in February.

Does anyone in the media realize how they sound when acting surprised by snow in the winter?  This does occur about the same time every year.

Will Cloud Computing be the Sub-Prime of IT?

Sub-prime lending enabled a great quantity of borrowers, lenders, and investors to participate in markets and transactions which they most often did not fully understand. In many (perhaps most) cases they did not understand the risk involved, did not understand the contracts they entered into, and were completely unprepared when risks became realities.

Similarly, public cloud computing services are enabling great quantities of new customers, service providers, and investors to make low cost entries into complex transactions with many poorly understood or entirely unknown risks.

The often low upfront costs combined with rapid activation processes for public cloud services are enticing to many cost-conscious organizations. However, many of these services have complex pay-as-you-go usage rates which can result in surprisingly high fees as the services are rolled out to more users and become a key component of the users’ regular workflows.

Many public cloud services start out with low introductory rates which go up over time.  The pricing plans rely on the same psychology as introductory cable subscriptions and adjustable rate mortgages.

Additionally, there is often an inexpensive package rate which provides modest service usage allowances. Like many current cell phone data plans, once those usage limits are reached, additional fees automatically accumulate for the items below (a rough cost sketch follows this list):

  • CPU usage – sometimes measured in seconds or microseconds per processor core, but often priced as any portion of an hour.
  • Memory usage – measured by the amount of RAM allocated to a server or application instance.
  • Storage usage – usually measured by GB of disk space utilized, but sometimes still priced by the MB.  Sometimes charged even if only allocated but not utilized.
  • Data transfer – often measured in GB inbound and/or outbound from the service. Many providers may charge data transfer fees for moving data between server (or service) instances within the same account.
  • IO – this is often nebulous and difficult to estimate in advance.  In the simplest definition, IO stands for Input and Output.  Many technology professionals get mired in long debates about how to measure or forecast IOs and what sorts of computer activities should be counted.  The term is often applied to accessing a disk to load information into memory, or to write information from memory to disk.  If a service plan includes charges for IOs, it’s important the customer understand what they could be charged for.  A misbehaving application, operating system, or hardware component can cause significant amounts of IO activity very quickly.
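
To make those pricing traps concrete, here is a hypothetical back-of-the-envelope estimator; the rates, allowances, and usage figures are invented for illustration and are not any provider’s actual price list:

    import math

    # Hypothetical pay-as-you-go plan; every rate below is made up for illustration.
    INCLUDED_GB_STORAGE = 100          # package allowance before overage charges begin
    INCLUDED_GB_TRANSFER = 500
    RATE_CPU_HOUR = 0.09               # per core, any portion of an hour billed as a full hour
    RATE_GB_STORAGE_OVERAGE = 0.12     # per GB-month above the allowance
    RATE_GB_TRANSFER_OVERAGE = 0.08    # per GB outbound above the allowance
    RATE_MILLION_IO = 0.25             # per million IO operations

    def monthly_bill(cpu_core_hours, storage_gb, transfer_gb, io_ops):
        cpu = math.ceil(cpu_core_hours) * RATE_CPU_HOUR
        storage = max(0, storage_gb - INCLUDED_GB_STORAGE) * RATE_GB_STORAGE_OVERAGE
        transfer = max(0, transfer_gb - INCLUDED_GB_TRANSFER) * RATE_GB_TRANSFER_OVERAGE
        io = (io_ops / 1_000_000) * RATE_MILLION_IO
        return cpu + storage + transfer + io

    # A small pilot vs. the same service after a company-wide rollout
    print(f"pilot:   ${monthly_bill(200, 80, 300, 5_000_000):,.2f}")           # $19.25
    print(f"rollout: ${monthly_bill(6_000, 2_000, 9_000, 400_000_000):,.2f}")  # $1,548.00

The same subscription that looks like pocket change during a pilot becomes a material line item once usage grows past the packaged allowances.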

User accounts, concurrent user sessions, static IP addresses, data backups, geographic distribution or redundancy, encryption certificates and services, service monitoring, and even usage reporting are some examples of “add-ons” which providers will upsell for additional fees.

It is also common for public cloud service providers to tout a list of high-profile clients. It would be a mistake to believe the provider offers the same level of service, support, and security to all of their customers. Amazon, Google, and Microsoft offer their largest customers dedicated facilities with dedicated staff who follow the customer’s approved operational and security procedures. Most customers do not have that kind of purchasing power.  Although a service provider’s marketing may tout these sorts of high-profile clients, those customers may well be paying for a Private Cloud.

“Private Cloud” is typically the current marketing terminology for situations where a customer organization outsources hardware, software, and operations to a third party and contracts the solution as an “Operational Expense” rather than making any upfront “Capital Expenditures” for procurement of assets.

* Op-Ex vs Cap-Ex is often utilized as an accounting gimmick to help a company present favorable financial statements to Wall Street.  There are many ways an organization can abuse this and I’ve seen some doozies. 

Two key attractions for service providers considering a public cloud offering are the Monthly Recurring Charge (MRC) and auto renewing contracts.  The longer a subscriber stays with the service, the more profitable they become for the provider. Service providers can forecast lower future costs due to several factors:

  • Technology products (particularly hard drives, CPUs, memory, and networking gear) continue to get cheaper and faster.
  • The service provider may be able to use open source software for many of the infrastructure services which an Enterprise Organization might have purchased from IBM, Microsoft, or Oracle.  The customer organization could achieve these same savings internally, but is often uncomfortable and unfamiliar with the technologies and unwilling to invest in the workforce development needed to make the transition.
  • The service provider may also be able to utilize volume discounts to procure software licenses at a lower cost than their individual customers could.  For small customer organizations this often holds true.  For larger enterprise organizations this is usually a false selling point, as the enterprise should have an internal purchasing department to monitor vendor pricing and negotiate as needed.  Unfortunately, many large organizations can be something of a dysfunctional family, and there may not be a good relationship between IT, customer business units, and corporate procurement.  Some executives will see outsourcing opportunities as the “easy way out” vs solving internal organizational issues.
  • Off-shore labor pools are continuing to grow both in size and in capability.  Additionally, the current economic circumstances have been holding down first world labor rates.
  • Service providers can and do resell and outsource with other service providers.  In the mobile phone industry there are numerous Mobile Virtual Network Operators (MVNOs) who contract for bulk-rate services from traditional carriers and then market those services for resale under their own branding and pricing plans.  Many cloud service providers have adopted similar business models.

All of these cost factors contribute to the service provider’s ability to develop a compelling business case to its investors.

The subprime market imploded with disastrous consequences when several market conditions changed. New construction saturated many markets and slowed or reversed price trends. Many customers found they couldn’t afford the products and left the market (often through foreclosures, which furthered the oversupply). Many other customers recognized the price increases built into their contracts (variable rate mortgages) and returned to more traditional products (by refinancing to conventional loans). And many sub-prime lenders were found to have engaged in questionable business practices (occasionally fraudulent, often just plain stupid) which eventually forced them out of the business while leaving their customers and investors to clean up the mess.

Like the housing market, public cloud computing is on course to create an oversupply. Many of these cloud providers are signing customers up for contracts and pricing models which will be invalidated in a short time (as processing, storage, and bandwidth continue to get faster and cheaper). And few, if any, of these providers understand the risk environment within which they operate.

Public cloud computing is sure to have a long future for “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.

For personal and organizational computing of “inherently private” data, the long-term value is questionable, and should be questioned.

Current public cloud services offer many customers a cost advantage for CPU processing. They also offer some customers a price advantage for data storage, but few organizations have needs for so-called “big data”.  The primary advantage of public cloud services to many organizations is distributed access to shared storage via cheap bandwidth.

Competing on price is always a race to the bottom.  And that is a race very few ever truly win.

Public cloud service providers face significant business risks from price competition and oversupply.  We saw what happened to the IT industry in the early 2000s, and those were two key factors.

Another factor is declining customer demand.  The capabilities of mobile computing and of low-cost on-site systems continue to grow rapidly.  At today’s pricing, it may be cheaper to host an application in the cloud than to provide enough bandwidth at the corporate office(s) for mobile workers.  That is changing rapidly.

A 1.5 Mbps T1 connection used to cost a business several thousand dollars per month.  Now most can get 15 Mbps to 100 Mbps for $79 per month.  As last-mile fiber connectivity continues to be deployed, we’ll see many business locations with access to 1 Gbps connections for less than $100 per month.
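
Worked out per megabit (assuming $2,000/month for the old T1, an example figure within the “several thousand dollars” range above), the trend is stark:

    # Rough price-per-Mbps comparison; the T1 price is an assumed example figure.
    plans = [
        ("T1 (1.5 Mbps)",             1.5,    2000.0),
        ("Business cable (100 Mbps)", 100.0,  79.0),
        ("Fiber (1 Gbps)",            1000.0, 100.0),
    ]
    for name, mbps, monthly in plans:
        print(f"{name:28s} ${monthly / mbps:>10,.2f} per Mbps per month")

That is roughly $1,333 per Mbps falling to about ten cents per Mbps.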

All of those factors are trumped by one monster of a business risk facing public cloud service providers and customers today: how should they manage the security of inherently private data?

Many organizations have little to no idea of how to approach data classification, risk assessment, and risk mitigation.  Even the largest organizations in the critical infrastructure industries are struggling with the pace of change, so it’s no surprise that everyone else is behind on this topic.  Additionally, the legal and regulatory systems around the world are still learning how to respond to these topics.

Outsourcing the processing, storage, and/or protection of inherently private data does not relieve an organization from its responsibilities to customers, auditors, regulators, investors, or other parties who may have a valid interest.

Standards, regulations, and customer expectations are evolving.  What seems reasonable and prudent to an operations manager in a mid-sized organization might appear negligent to an auditor, regulator, or jury.  What seems ok and safe today could have disastrous consequences down the road.

Unless your organization is well versed in data classification and protection, and has the ability to verify a service provider’s operational practices, I strongly recommend approaching Public Cloud services with extreme caution.

If your organization is not inherently part of the public web services “eco-system”, it would be prudent to restrict your interactions with Public Cloud computing to “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.  At least until the world understands it a bit better.

The costs of processing and storing private data will continue to get cheaper.  If you’re not able to handle your private data needs in-house, there are still plenty of colocation and hosting services to consider.  But before you start outsourcing, do some thoughtful housekeeping.  Really, if your organization has private data which does not provide enough value to justify in-house processing, storage, and protection… please ask yourselves why you even have this data in the first place.

COTS and corporate consumerism.

It is truly amazing how many companies go on consumeristic shopping sprees buying so-called “COTS packages” in hopes of instant gratification.

The marketing wordsmiths of the software industry have achieved great results in convincing folk the definition of COTS is something like:

Short for commercial off-the-shelf, an adjective that describes software or hardware products that are ready-made and available for sale to the general public. For example, Microsoft Office is a COTS product that is a packaged software solution for businesses. COTS products are designed to be implemented easily into existing systems without the need for customization.

Sounds great, doesn’t it?  Here is a portion of an alternate definition which rarely makes it into the marketing brochures:

“typically requires configuration that is tailored for specific uses”

That snippet is from US Federal Acquisition Regulations for “Commercial off-the-shelf” purchases.

In other words, most of the COTS packages should at least come with a “some assembly required” label on the box.  Granted, most vendors do disclose the product will need some configuration.  But most gloss over the level of effort involved, or sell it as another feature.  And most organizations seem to assign procurement decisions to those least able to accurately estimate implementation requirements.

The most offensive of these scenarios involves developer tools and prepackaged application components for software development shops. SDKs and APIs are not even close to being a true COTS product, but numerous vendors will sell them to unsuspecting customers as “ready to use” applications.

If the organization has a team of competent software developers… then really, what is the point of purchasing a “COTS” package which requires more customization (through custom software development) than just developing the features internally?

Some vendors have sold the idea that they provide additional benefits you wouldn’t get from developing it internally, such as:

  • packaged documentation comes with the software.
  • vendor gets feedback from many customers and makes it better for everyone.
  • vendor specializes in supporting the product.

Those are all suspect.

  • If the product requires customization, will the vendor provide custom documentation?  If not, their pre-packaged documentation will likely be useless.  The only authoritative source of documentation for source code… is the source code.  Good coding standards, including commenting and version control statements, will provide far more value than a collection of PDFs from VendorX.
    • Can the vendor provide an IDE plug-in which integrates Class, Method, API, and Interface documentation with the rest of your language environment?
    • Can the vendor be expected to keep these up to date for the development tools your team uses?
  • Increasingly, vendors are no longer the best or primary source of product information.  User communities increasingly evolve independently of specific vendors.  Many online user communities begin with the overall service or concept involved, and develop subgroups for specific vendor products.  As a result, it is increasingly easy to compare and contrast information for many competing products at a site which shares common interests and contexts.
  • Vendor support comes in many flavors, and not all of it is equally useful (or affordable) to all customers.
    • If the customer configuration is complex, continuity of support personnel is important.  Dedicated support from a large software vendor can run $1 Million per year per named individual providing support.  Otherwise your support calls go into the general queue with no continuity.
    • Large (publicly traded) software vendors operate on a financial basis which makes it difficult for them to run large scale professional services businesses.  Most every company that tries to combine product with large scale (i.e. thousands of staff consultants) professional services eventually implodes due to the cultural and financial conflicts between the two lines of business.

Failed software implementations can drive a company into the ground.  Complex COTS packages which only serve as a component to be “integrated” into customer systems through custom programming can often be a major contributing factor to project/program failures.  The larger the failure, the less likely the organization can retain sufficient stakeholder trust to try again.

Organizations with existing capabilities for large scale internal software development should reconsider the mantra of “All COTS, all the time, everywhere.”

US corporate financial practices haven’t just indoctrinated the citizenry into consumerism.  They’ve equally indoctrinated organizations of all kinds.  Before you make that next COTS purchase order, pause, and give a moment’s consideration to “producerism”.  The long-term benefits could be startling.

By the way, this phenomenon isn’t limited to software components.  I’ve seen organizations procure “appliances” at six figure costs because they perceived it to provide an application service which would save them $1 or $2 Million in software development costs downstream.  Unfortunately, they eventually learned it would require an additional $2 to $5 Million of software development costs to modify their application portfolio to work with these appliances.  After spending (wasting) 18 months and over $1 Million, they eventually found a solution they implemented internally with very little cost (simply replaced an old/deprecated programming language API with a newer one).

MS released Win 8 and 44% of internet is down today. Coincidence?

I think the Internet saw Microsoft’s new baby and vomited.

-from the Department of What Could Possibly Go Wrong?

General Delivery @ The Planet Earth Internet

Just read something on another blog that left me with one of those Wow/Aha feelings…

“Google yourself from time to time to get your mail.”

Once upon a time, folk in this country could post a letter to someone for “general delivery” at whichever post office the intended recipient might be expected to pass by during their journeys.  Upon arrival at a new town, a traveler would just pop into the local post office and ask if there was anything waiting for them.  I even used this once myself for something too large to fit in a mailbox.

As we watch the US Postal Service begin closing locations, it had never occurred to me to wonder how someone might replace the concept of “general delivery”.  But the above-referenced blog instruction demonstrates that the Internet can indeed handle general delivery just as well as email.  Pretty cool.

Signs at a bowling alley in Katy, TX.

Shoe Rental: Adults: $2.00. Seniors and Children: $2.00.
Hours: Sun-Thurs: 10:00 AM - Closing. Fri-Sat: 8:00 AM - Closing.

Missouri River flood plain, levee acres vs CRP acres.

Been seeing a lot of reports that approximately 500,000 acres of farm ground will be flooded out in the Missouri River Basin.  That got me wondering how much ground the Missouri River levee system was actually “protecting”.
And also wondering how much ground we’re paying people not to farm (under CRP).
So I looked up some numbers.

From the USGS website:

  • From bluff to bluff, the river-floodplain below Sioux City, Iowa, covers 1.9 million acres. Historically, the river meandered across more than one-fourth of this floodplain acreage. This “meander belt” contained a variety of fish and wildlife habitats including wetlands, sandbars, wet prairies, and bottomland forests. Seasonal floods provided the water needed to replenish shallow-water habitats used for fish and wildlife breeding and growth.
  • Nearly 354,000 acres of meander belt habitat were lost to urban and agricultural floodplain development.

From the USDA’s June 13, 2001 press release on CRP enrollments:

  • For this 41st general CRP sign-up, more than 38,000 offers were received on about 3.8 million acres nationwide. Enrollment of the 2.8 million acres will bring the total enrollment in the program to 29.9 million acres, leaving sufficient room under the 32-million-acre cap to continue enrollment in the Conservation Reserve Enhancement Program, continuous sign-up and other CRP initiatives. The Secretary has asked FSA to continue to consider ways to use continuous enrollments to ensure CRP contains those lands that are most erodible, most valuable to wildlife or that otherwise ensure the program targets the most vulnerable acres.

A word of caution about ITSM and ITIL

ITSM 2011-01-25: New words for old concepts

Deming and Drucker both warned against buying into fads.  ITIL was established by the UK government and developed extensively by IBM.  Much like PMI, it has evolved into a commercial beast churning out new books, courses, and tests.  Not surprisingly, IBM and others have taken ITIL to a new level of commercialization with endless products, software modules, and consulting engagements.
It is good for organizations to develop a methodology that incorporates standard language and processes for relatively universal concepts and requirements.  However, switching methodologies to keep up with popular fads makes about as much sense as requiring your workforce to switch from English to French on some arbitrary calendar date recommended by a business process re-engineering consultant.
Some of the things which came before ITIL:

  • PMI / PMP
  • PRINCE
  • SEI / CMM
  • ISO
  • TQM
  • Six Sigma
  • Engineering Mgt
  • GEM QA
  • NIST / UL
  • Telcordia / Bellcore
  • ITU / IEEE / IETF
  • 21 CFR Part 11
  • USMC BAMCIS