Search Engine Optimization (SEO) is like whack-a-mole.

For the tl;dr crowd…  Google’s algorithms are constantly changing, and no matter the topic, work in at least one mention of cats.  LOL.

 

https://www.startupgrind.com/blog/how-i-modified-my-seo-game-to-keep-up-with-google-in-2017-21/?utm_content=buffer6e14c&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer


Will Cloud Computing be the Sub-Prime of IT?

Sub-prime lending enabled great numbers of borrowers, lenders, and investors to participate in markets and transactions they often did not fully understand. In many (perhaps most) cases they did not understand the risks involved, did not understand the contracts they entered into, and were completely unprepared when those risks became realities.

Similarly, public cloud computing services are enabling great numbers of new customers, service providers, and investors to make low-cost entries into complex transactions with many poorly understood or entirely unknown risks.

The often low upfront costs, combined with rapid activation processes, make public cloud services enticing to many cost-conscious organizations. However, many of these services have complex pay-as-you-go usage rates which can result in surprisingly high fees as the services are rolled out to more users and become a key component of those users’ regular workflows.

Many public cloud services start out with low introductory rates which go up over time.  The pricing plans rely on the same psychology as introductory cable subscriptions and adjustable-rate mortgages.

Additionally, there is often an inexpensive package rate which provides modest service usage allowances. Like many current cell phone data plans, once those usage limits are reached, additional fees automatically accumulate (the sketch after this list shows how quickly the meters add up) for:

  • CPU usage – sometimes measured in seconds or microseconds per processor core, but often priced as any portion of an hour.
  • Memory usage – measured by the amount of RAM allocated to a server or application instance.
  • Storage usage – usually measured by GB of disk space utilized, but sometimes still priced by the MB.  Sometimes charged for space that is merely allocated but never utilized.
  • Data transfer – often measured in GB inbound and/or outbound from the service. Many providers also charge data transfer fees for moving data between server (or service) instances within the same account.
  • IO – this is often nebulous and difficult to estimate in advance.  In the simplest definition, IO stands for Input and Output.  Many technology professionals get mired in long debates about how to measure or forecast IOs and what sorts of computer activities should be counted.  It’s a term most often applied to accessing a disk to load information into memory, or to write information from memory to disk.  If a service plan includes charges for IOs, it’s important that the customer understand what they could be charged for.  A misbehaving application, operating system, or hardware component can generate significant amounts of IO activity very quickly.
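
To make that concrete, here is a rough sketch of how these meters add up as usage grows.  All of the rates, allowances, and usage figures below are hypothetical numbers I made up for illustration; they do not reflect any actual provider’s price list.

# Hypothetical pay-as-you-go pricing -- every number here is made up for
# illustration and does not reflect any actual provider's price list.
RATES = {
    "cpu_hours":   0.10,  # $ per (partial) instance-hour
    "storage_gb":  0.15,  # $ per GB allocated per month
    "transfer_gb": 0.12,  # $ per GB transferred
    "io_million":  0.20,  # $ per million IO operations
}
INCLUDED = {"cpu_hours": 750, "storage_gb": 50, "transfer_gb": 25, "io_million": 5}

def monthly_bill(usage, base_fee=29.00):
    """Package base fee plus overage charges for anything beyond the allowances."""
    bill = base_fee
    for item, used in usage.items():
        overage = max(0.0, used - INCLUDED.get(item, 0.0))
        bill += overage * RATES[item]
    return bill

# A ten-user pilot stays comfortably inside the allowances...
pilot = {"cpu_hours": 500, "storage_gb": 40, "transfer_gb": 10, "io_million": 2}
# ...but a department-wide rollout a year later does not.
rollout = {"cpu_hours": 8000, "storage_gb": 900, "transfer_gb": 600, "io_million": 80}

print("Pilot:   $%.2f/month" % monthly_bill(pilot))    # $29.00
print("Rollout: $%.2f/month" % monthly_bill(rollout))  # $965.50

The same “low introductory rate” that looked harmless during the pilot becomes a thirty-fold increase once the service is woven into everyone’s workflow.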

User accounts, concurrent user sessions, static IP addresses, data backups, geographic distribution or redundancy, encryption certificates and services, service monitoring, and even usage reporting are some examples of “add-ons” which providers will upsell for additional fees.

It is also common for public cloud service providers to tout a list of high-profile clients. It would be a mistake to believe the provider offers the same level of service, support, and security to all of its customers. Amazon, Google, and Microsoft offer their largest customers dedicated facilities with dedicated staff who follow the customer’s approved operational and security procedures. Most customers do not have that kind of purchasing power.  Although a service provider’s marketing may tout these sorts of high-profile clients, those customers may well be paying for a Private Cloud.

Private Cloud solutions are typically the current marketing terminology for situations where a customer organization outsources hardware, software, and operations to a third party and contracts the solution as an “Operational Expense” rather than making any upfront “Capital Expenditures” for procurement of assets.

* Op-Ex vs. Cap-Ex is often utilized as an accounting gimmick to help a company present favorable financial statements to Wall Street.  There are many ways an organization can abuse this, and I’ve seen some doozies.

Two key attractions for service providers considering a public cloud offering are the Monthly Recurring Charge (MRC) and auto-renewing contracts.  The longer a subscriber stays with the service, the more profitable that subscriber becomes for the provider. Service providers can forecast lower future costs due to several factors:

  • Technology products (particularly hard drives, CPUs, memory, and networking gear) continue to get cheaper and faster.
  • The service provider may be able to use open source software for many of the infrastructure services which an enterprise organization might have purchased from IBM, Microsoft, or Oracle.  The customer organization could achieve these same savings internally, but is often unfamiliar and uncomfortable with the technologies and unwilling to invest in the workforce development needed to make the transition.
  • The service provider may also be able to utilize volume discounts to procure software licenses at a lower cost than their individual customers could.  For small customer organizations this often holds true.  For larger enterprise organizations it is usually a false selling point, as the enterprise should have an internal purchasing department to monitor vendor pricing and negotiate as needed.  Unfortunately, many large organizations can be something of a dysfunctional family, and there may not be a good relationship between IT, customer business units, and corporate procurement.  Some executives will see outsourcing opportunities as the “easy way out” versus solving internal organizational issues.
  • Off-shore labor pools continue to grow both in size and in capability.  Additionally, the current economic circumstances have been holding down first-world labor rates.
  • Service providers can and do resell and outsource with other service providers.  In the mobile phone industry there are numerous Mobile Virtual Network Operators (MVNOs) who contract for bulk-rate services from traditional carriers and then resell those services under their own branding and pricing plans.  Many cloud service providers have adopted similar business models.

All of these cost factors contribute to the service provider’s ability to develop a compelling business case to its investors.

The subprime market imploded with disastrous consequences when several market conditions changed. New construction saturated many markets and slowed or reversed price trends. Many customers found they couldn’t afford the products and left the market (often through foreclosures, which furthered the oversupply). Many other customers recognized the price increases built into their contracts (adjustable-rate mortgages) and returned to more traditional products (by refinancing to conventional loans). And many sub-prime lenders were found to have engaged in questionable business practices (occasionally fraudulent, often just plain stupid) which eventually forced them out of the business while leaving their customers and investors to clean up the mess.

Like the housing market, public cloud computing is on course to create an oversupply. Many of these cloud providers are signing customers up for contracts and pricing models which will be invalidated in a short time (as processing, storage, and bandwidth continue to get faster and cheaper). And few, if any, of these providers understand the risk environment within which they operate.

Public cloud computing is sure to have a long future for “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.

For personal and organizational computing of “inherently private” data, the long-term value is questionable, and should be questioned.

Current public cloud services offer many customers a cost advantage for CPU processing. They also offer some customers a price advantage for data storage, but few organizations have needs approaching so-called “big data”.  The primary advantage of public cloud services for many organizations is distributed access to shared storage via cheap bandwidth.

Competing on price is always a race to the bottom.  And that is a race very few ever truly win.

Public cloud service providers face significant business risks from price competition and oversupply.  We saw what happened to the IT industry in the early 2000s, and those were two key factors.

Another factor is declining customer demand.  The capabilities of mobile computing and of low-cost on-site systems continue to grow rapidly.  At today’s pricing, it may be cheaper to host an application in the cloud than to provide enough bandwidth at the corporate office(s) for mobile workers.  That is changing rapidly.

A T1 (1.5 Mbps) connection used to cost a business several thousand dollars per month.  Now most can get 15 Mbps to 100 Mbps for $79 per month.  As last-mile fiber connectivity continues to be deployed, we’ll see many business locations gain access to 1 Gbps connections for less than $100 per month.

All of those factors are trumped by one monster of a business risk facing public cloud service providers and customers today: how should they manage the security of inherently private data?

Many organizations have little to no idea how to approach data classification, risk assessment, and risk mitigation.   Even the largest organizations in the critical infrastructure industries are struggling with the pace of change, so it’s no surprise that everyone else is behind on this topic.  Additionally, the legal and regulatory systems around the world are still learning how to respond to these issues.

Outsourcing the processing, storage, and/or protection of inherently private data does not relieve an organization of its responsibilities to customers, auditors, regulators, investors, or other parties who may have a valid interest.

Standards, regulations, and customer expectations are evolving.  What seems reasonable and prudent to an operations manager in a mid-sized organization might appear negligent to an auditor, regulator, or jury.  What seems OK and safe today could have disastrous consequences down the road.

Unless your organization is well versed in data classification and protection, and has the ability to verify a service provider’s operational practices, I strongly recommend approaching Public Cloud services with extreme caution.

If your organization is not inherently part of the public web services “eco-system”, it would be prudent to restrict your interactions with Public Cloud computing to “inherently public” services such as media distribution, entertainment, education, marketing, and social networking.  At least until the world understands it a bit better.

The costs of processing and storing private data will continue to get cheaper.  If you’re not able to handle your private data needs in house, there are still plenty of colocation and hosting services to consider.  But before you start outsourcing, do some thoughtful housekeeping.  Really, if your organization has private data which does not provide enough value to justify in-house processing, storage, and protection… please ask yourselves why you even have this data in the first place.

WordPress dot com’s “Visual” editor sucks, and its “HTML” mode sucks worse.

Update: This post title started out as “Figuring out WordPress dot com’s Visual and HTML editor modes”.  However, as I spent more and more time sussing out the workings of these, my opinion dropped a bit.

Since my search for an offline WordPress.com editor was a bust, and my efforts to learn their online interface aren’t going well either, I suspect my best long-term option is going to be using Composer to develop some custom templates which I can fill in offline and then copy/paste into the WordPress online “HTML” interface.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you’ve stumbled across this entry and are hoping it will help explain the mysteries of the WordPress.com online editing interface… well, don’t count on it.  I’m still struggling to wrap my brain around their concepts of “Visual” and “HTML”.

Visual it isn’t.  That would be a “WYSIWYG” editor, and this thing falls way short.  Pretty bizarre, considering it’s an editor dedicated solely to writing for their web service.  Other “offline” WYSIWYG editors have had to contend with never being sure what kind of web server the resulting pages would be displayed from.  The WordPress team has the luxury of only needing their editor to be correct with their own website.

As for “HTML” mode, it’s not what I was expecting either.  It hides some of the basic HTML tags, probably in the interest of being more “beginner friendly”.  Unfortunately there aren’t any menus to guide a beginner with selection/application of HTML tags, so the “beginner friendly” idea falls flat too.  I’m no HTML guru, but in the past I have hand-coded some web pages using plain text editors.  For years I used Netscape Composer to create and maintain project documentation and reports.  Composer supports pure HTML editing as well as a WYSIWYG mode.  Fortunately, Composer is still alive and well in the Mozilla SeaMonkey project.

If the WordPress.com team wants the “HTML” mode of their online editor to be beginner friendly, they should include some menu options.  If they want it to be expert friendly, then it should really show all of the tags.  In either case, I hope they at least look at some of the other tools out there.

Below are the results of some testing I’ve been doing to grasp the behaviors of the “Visual” and “HTML” modes in this editor tool.  It’s fairly random, nonsensical stuff: just a lot of typing things in and switching between Visual, HTML, and preview modes to see what effect things have.  I started out trying to solve one of my most glaring frustrations with the editor… single spacing vs. double spacing.

I’ve found the line-spacing tricks.  Either switch back and forth between Visual and HTML modes, or, while in Visual mode, use the key combination SHIFT+RETURN to insert a normal new line (without the double spacing).  Note: solving the next problem brought the double spacing back.
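
For reference, here’s my read on what’s happening under the hood (this is typical behavior for TinyMCE-style editors in general, not anything WordPress documents): RETURN starts a new paragraph element, while SHIFT+RETURN inserts a line break inside the current paragraph.

<p>RETURN produces a new paragraph,</p>
<p>so this line gets paragraph spacing.</p>

<p>SHIFT+RETURN produces a line break,<br />
so this line stays single spaced.</p>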

Next is figuring out how to get the editor to respect indentation of the first line in a new paragraph.  None of the normal HTML tag tricks I’ve used elsewhere work here, and I don’t like the idea of embedding transparent images to force alignment.  Quite a few folks across the web recommend using CSS, but I wanted to try inline HTML tags first.  Some further searching turned up a suggestion to wrap the paragraph in a tag like this:

<p style="text-indent: 2em;">  Put the text of your paragraph here. </p>

Of course, that led to the question of how to quote source code in a WordPress.com post. Fortunately I found the answer from WordPress support.
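
For anyone else searching: the support article describes wrapping the snippet in a shortcode, roughly like this (I’m quoting from memory, and the set of supported language values may vary):

[sourcecode language="xml"]
<p style="text-indent: 2em;">  Put the text of your paragraph here. </p>
[/sourcecode]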

Time for another save and preview.

Wow! What a waste of time.  Anyone reading this will probably notice that the paragraphs are still double spaced.  Adding the indentation tag modified the paragraph spacing (this is starting to remind me why I’ve never enjoyed coding web pages).  And the extra “code” markup which was supposed to display embedded source code is actually displayed right along with the HTML I was trying to show.  In other words, I did not intend for the [Source language blah blah blah] stuff to be visibly displayed… only the actual HTML tag contained within should have been displayed.

Using the HTML editor to “fix” the source code quoting seems to have turned it into a preformatted paragraph?  Time for another preview.

Well, that was closer.  Returning from preview I think I can see what needs to be fixed in the Visual Editor.  The embedded HTML tag for paragraph formatting should now display as an inline code snippet.

Ok, I’ve had enough of this for now.  When I pick this up again, I’m going to focus on creating an offline SeaMonkey Composer template that can be pasted into the WordPress HTML editor interface.  I’m sure that will require some trial and error to figure out compatible tags and fit things within the active “Theme” layout.

Additionally, I need to do some more Theme research.  I’ve got a couple web pages (which I built nearly 10 years ago) that have displayed just fine on four other hosting services, but every WordPress theme I’ve tried mangles them.  Redesigning them isn’t an option, so they may just remain with the other hosts.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

How now brown cow.

How do you get the line returns to use a smaller spacing?

The editor seems to be “double spacing” each new line.

Really looks like 2nd grade.

On the other hand, if you keep typing more sentences without a line return (i.e., run-on paragraphs), then the automagic line spacing seems to like “single spacing” just fine.  So what does this WordPress “Visual” editor have against single-spaced paragraphs?  And how do you override it?

Seeking an offline editor for WordPress (for OS X Lion).

With a need to write up additional notes on several projects, I’ve decided to search out some options for offline writing which could later be uploaded into WordPress.  Copy/paste is OK for occasional notes, but I’d much prefer a more robust solution.  I already have apps which handle plain text and can be used to copy/paste a simple note into the WordPress web interface.  What I’d like is an offline app that can do more advanced formatting, so blog posts look more like structured articles and less like random blobs of text.

Since I spend a lot of time working in Xcode, it needs to be a Mac desktop application.

Results of testing WordPress posts from some apps I already have:

  • Apple Pages – no export to HTML. No easy way to post to WordPress.  Leaves me back at the copy/paste situation.  I have previously posted content from Pages documents.  While it wasn’t as bad as a root canal, it required enough format re-jiggling to make it impractical for everyday blogging needs.
  • Apple iBooks Author – great application, but no export to HTML and only publishes to iBook store.
  • Apple Keynote – similar situation to Pages. Requires using copy/paste and re-working the results to appear correctly.
  • SeaMonkey (a Mozilla-based suite) – this is the successor to the old Netscape suite, and one of its components is Composer.  Composer is the only component of the SeaMonkey suite that I use, so I’ve configured the settings to always open a new Composer document.  It’s a decent little HTML editor.  The “publish” options seem to accept the recommended settings for WordPress.  Composer’s publish operation shows a successful WordPress login and completion status; however, something doesn’t match up, because the results never appear anywhere in my WordPress account.  In any event, SeaMonkey’s publish feature is designed for updating specific web pages, so I’ll continue looking at other apps for a better solution.

Results of some apps found listed on WordPress:

  • Scribefire – mentioned in a blog entry about WordPress editors.  It’s a browser plugin; versions are available for Chrome, Firefox, and Safari.  The first things I noticed were mostly bad reviews and an extreme lack of documentation. I tested the Firefox version and didn’t care for it.  It wasn’t clear where the draft posts were being stored, but the dialogs in the “export” option seemed to confirm work was not being saved in a document format that could be accessed without this plugin.  That doesn’t work for me.
  • Blogo – Website states Blogo won’t run on OS X Lion.  Reading their blog from July 2011 showed they weren’t even going to try updating for Lion.  Movin’ on.
  • Ecto – Website indicates “illumineX” bought the app from its developer in 2008, and the site seems to have been neglected since then.  A described iPhone app never materialized (it was a pre-iOS app, if it existed at all).  Their support forums show a SQL crash error code and look like they’ve been neglected since 2007.  Despite all of this, I decided to download the trial version and take a look.  The download was a 5.9MB ZIP file containing a 12.4MB app file dated April 15, 2010.  Surprisingly, the app launched, ran just fine, and was able to log in and download my previous WordPress posts.  After editing a draft via the web interface, Ecto detected and showed the difference between the local offline copy and the server copy.  Everything seemed to be going quite well; I created a blog entry from the app and was able to adjust categories, tags, and other attributes.  But attempting to publish the post resulted in an error and a loss of all of the edits made in the app (despite an enabled autosave feature).  It showed promise, but isn’t worth troubleshooting given the apparent lack of activity at the company.  And we’re walking…
  • MarsEdit – a $40 app available from the developer’s website or the Mac App Store. The website is up to date, shows plenty of activity, and the developer’s blog shows he is working on learning/adopting Apple’s new app guidelines for the Mac App Store.  I downloaded the trial version from the website: currently version 3.4.4, a 6.6MB zip file containing a 15.2MB app file dated March 9, 2012.  First (simple) tests worked well.  MarsEdit finds existing posts from the blog, edits new ones, uploads to Publish or Draft, and saves an offline copy in a Library folder that can be read with other tools if need be. Testing more advanced work such as text formats and image layout didn’t go so well.  It was easy enough to paste an image into the editor, but there wasn’t any way to adjust the image properties.  Additionally, the editor showed options for laying out the text around the image, but they didn’t work: there were problems within the editor, and the layout was ignored when uploaded to WordPress.  Conclusion: MarsEdit would be great for “syncing” plain text notes into blog posts… but not at $40.  Unfortunately, I don’t see $40 worth of editor/layout features in this app.

I’ll keep looking for an offline editor to use with WordPress.  But, in the meanwhile, it looks like I’ll stick with using Pages, Keynote, or Notes to write things up offline and then use copy/paste to post later.

For now it appears anything more than plain text will continue to require using the web interface to make adjustments prior to publishing a post.  I suspect the long-term solution will be to get more familiar with the WordPress HTML options and investigate whether some Themes are easier to work with than others.

Run multiple instances of Firefox

do shell script "/Applications/Firefox.app/Contents/MacOS/firefox-bin -P TargetProfileName TargetURL > /dev/null 2>&1 &"

Substitute your required values for TargetProfileName and TargetURL as needed.

Multiple instances of this command can be entered into an AppleScript and saved for reuse as needed.  The above command creates “windowless” processes.  I like to have a small browser window appear for each session launched by the script, so I use this syntax instead:

do shell script "/Applications/Firefox.app/Contents/MacOS/firefox-bin -P TargetProfileName TargetURL &> /dev/null &"

I’ve found it helps to separate each command with a “delay 1” command in the script.  You may want to experiment with this depending upon your machine’s capabilities and the complexity of the TargetURL you’re launching in each instance of the browser.
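
Putting those pieces together, a minimal script might look like the following.  The profile names and URLs are placeholder values; substitute your own:

-- Launch two Firefox sessions, each with its own profile, one second apart.
-- "WorkProfile", "TestProfile", and the URLs are placeholders.
do shell script "/Applications/Firefox.app/Contents/MacOS/firefox-bin -P WorkProfile http://example.com/ &> /dev/null &"
delay 1
do shell script "/Applications/Firefox.app/Contents/MacOS/firefox-bin -P TestProfile http://example.org/ &> /dev/null &"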

By incorporating browser plugins such as Greasemonkey, YSlow, and Firebug, you can automate a wide range of web site testing scenarios.  This technique is also handy for scripting the automatic, scheduled retrieval of data from web servers.  A couple of associates use this for retrieving application server logs and the daily market data for their personal stock portfolios.

For Windows users, the batch script below will accomplish similar results on Windows XP.  Save it as a .bat file and run it from a command prompt.

This script includes a menu to choose between launching all configured profiles, a single profile/tab, the profile manager, or just exiting without doing anything.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@echo off
cls

set UserChoiceIs=

echo  Would you like to launch all available Mozilla Firefox profiles and tabs?
echo      Select A, to launch all profiles and tabs.
echo      Select S, to launch a single profile with only 1 tab.
echo      Select P, to launch the Mozilla Firefox profile manager.
echo      Select E, to exit (launch nothing).

set /p UserChoiceIs={a,s,p,e}
IF NOT "%UserChoiceIs%"=="" SET UserChoiceIs=%UserChoiceIs:~0,1%
IF /I "%UserChoiceIs%"=="a" GOTO ALLPROFILES
IF /I "%UserChoiceIs%"=="s" GOTO SINGLETAB
IF /I "%UserChoiceIs%"=="p" GOTO GETPROFILEMANAGER
IF /I "%UserChoiceIs%"=="e" GOTO ENDGOODBYE
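REM Any other input falls through to the next label and launches all profiles.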

:ALLPROFILES
REM echo you've selected to launch all profiles/tabs.
REM pause
start "" "C:\Program Files\Mozilla Firefox\firefox.exe" -p 1stTargetProfile -no-remote
start "" "C:\Program Files\Mozilla Firefox\firefox.exe" -p 2ndTargetProfile -no-remote
start "" "C:\Program Files\Mozilla Firefox\firefox.exe" -p 3rdTargetProfile -no-remote
GOTO ENDANDEXIT

:SINGLETAB
REM echo you've selected to launch a single profile/tab.
REM pause
start "" "C:\Program Files\Mozilla Firefox\firefox.exe" -p 1stTargetProfile -no-remote "about:robots"
GOTO ENDANDEXIT

:GETPROFILEMANAGER
REM echo you've selected to launch the Mozilla Firefox profile manager.
REM pause
start "" "C:\Program Files\Mozilla Firefox\firefox.exe" -p
GOTO ENDANDEXIT

:ENDGOODBYE
REM echo you've selected to exit
REM pause
exit

:ENDANDEXIT
REM echo Goodbye.
REM pause
exit

High Performance Web Sites

I’m currently reading the 1st edition from 2007: High Performance Web Sites: Essential Knowledge for Front-End Engineers.

In the near term, Chapter 4 (which covers Rule 2: Use a Content Delivery Network) wouldn’t be applicable for smaller government organizations hosting from their own data centers.  But in the long term it would be a good concept to include in any discussion of a transition to federal cloud computing… i.e., looking for a federal cloud which includes geographic diversity appropriate for getting front-end content closer to the users.

Chapter 16 (Rule 14: Make Ajax Cacheable) probably doesn’t apply either.

The other chapters/rules should be required reading for all architects and developers working with web applications.  At 138 pages (including the two chapters which could be skipped), it’s a pretty quick read.

The author put out a second book in 2009, but it might be a little more advanced than what FSA needs to get started: Even Faster Web Sites: Performance Best Practices for Web Developers.

Both of these are O’Reilly books.  I’m reading the High Performance book via the IEEE connection to Safari Books Online.  The author previously worked on web performance at Yahoo and has since moved to the performance team at Google.  He also developed YSlow, the performance analysis extension for Firebug.

Developing for Performance: application profiling can save you millions.

Recently I read up on some features of Google’s AppEngine.  They impose quotas and limits on developers who use AppEngine for hosting.

My favorite is limiting processing time to a maximum of 30 seconds per transaction.  There are also quotas on things such as the number of URL calls, image manipulations, data transmitted per transaction, memcache API calls, and so on.

The reaction so far seems to be a consensus that they’ve come up with terms of service which reflect good coding and performance standards for a web application… and figured out how to monitor and enforce them.  (Google does offer developers the option of paying extra if they can’t squeeze their transactions into the free quotas.)

They’ve built application profiling into AppEngine and given developers the option to use profile_main() (instead of the standard main()) to invoke the profiling tools.  The profiling functions write all of the performance data to a log.
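
For the Python runtime, the documented pattern looked roughly like the sketch below.  This is my reconstruction of the idea rather than a verbatim copy of Google’s docs, so check the current AppEngine documentation before relying on the details:

import cProfile
import logging
import pstats
import StringIO

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write("Hello, profiler.")

application = webapp.WSGIApplication([("/", MainPage)])

def real_main():
    # The normal, unprofiled entry point.
    run_wsgi_app(application)

def profile_main():
    # Run the real entry point under cProfile and write the stats to the log.
    prof = cProfile.Profile()
    prof = prof.runctx("real_main()", globals(), locals())
    stream = StringIO.StringIO()
    stats = pstats.Stats(prof, stream=stream)
    stats.sort_stats("cumulative")  # biggest time sinks first
    stats.print_stats(80)           # top 80 lines is plenty for a log entry
    logging.info("Profile data:\n%s", stream.getvalue())

# Point main at profile_main to profile every request; switch back to
# real_main when you're done measuring.
main = profile_main

Every request handled while main points at profile_main gets a full timing breakdown in the application log, which is usually enough to spot the handful of functions worth optimizing.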

I’m not saying I know of any organizations who could have saved a couple hundred million dollars by challenging their developers to utilize existing application profiling tools to meet a similar set of performance standards, in lieu of purchasing hundreds of new servers.  Well, OK, I am saying that.

If you know someone with application performance problems who’s thinking about purchasing substantial amounts of new hardware, you might suggest they try some application performance profiling first.   Without this sort of performance (and capacity) information, it’s likely the new equipment will either be deployed to the wrong places or, worse, be utterly excessive for the situation at hand.
Below are some of the transaction-specific limits from:

http://code.google.com/appengine/docs/quotas.html

Quotas and Limits

Each incoming request to the application counts toward the Requests quota.

Data received as part of a request counts toward the Incoming Bandwidth (billable) quota. Data sent in response to a request counts toward the Outgoing Bandwidth (billable) quota.

Both HTTP and HTTPS (secure) requests count toward the Requests, Incoming Bandwidth (billable) and Outgoing Bandwidth (billable) quotas. The Quota Details page of the Admin Console also reports Secure Requests, Secure Incoming Bandwidth and Secure Outgoing Bandwidth as separate values for informational purposes. Only HTTPS requests count toward these values.

CPU processing time spent executing a request handler counts toward the CPU Time (billable) quota.

For more information on quotas, see Quotas, and the “Quota Details” section of the Admin Console.

In addition to quotas, the following limits apply to request handlers:

  • request size – 10 megabytes
  • response size – 10 megabytes
  • request duration – 30 seconds
  • simultaneous dynamic requests – 30 *
  • maximum total number of files (app files and static files) – 3,000
  • maximum size of an application file – 10 megabytes
  • maximum size of a static file – 10 megabytes
  • maximum total size of all application and static files – 150 megabytes