Wednesday, November 14, 2012

MLearning Is Dead, Long Live MLearning

MLearning died before it was born because the emphasis is on the mobile and on the learning, not on the learner. In essence we missed the boat on basic composition: who, what, where, when, why, and how. Oh, we got the What - it's MLearning. One day soon it will be holographic, real-time games in virtual worlds that are affordable, achievable and real. Sensing any sarcasm?


On November 8, 1883, while workers were building additions to the south wing of the third capitol building, a large portion of the structure collapsed, killing five workers. Frank Lloyd Wright was an eyewitness and left this account in his Autobiography: "The interior columns had fallen and the whole interior construction was a gigantic rubbish heap in the basement.... Whitened by lime dust as sculpture is white, men with bloody faces came plunging wildly out of the basement entrance blindly striking out about their heads with their arms, fighting off masonry and falling beams. Some fell dead on the grass under the clear sky. Others fell insensible. One workman, lime-whitened too, hung head-downward from a fifth story window, pinned to the sill by an iron beam on a crushed foot, moaning the whole time. A ghastly red stream ran from him down the stone wall."

Perhaps the architect of the South Wing of the Wisconsin Capitol Building was responsible, perhaps it was the builder - who knows - but one can certainly surmise that less thought was given to the planning and more to the execution. In fact, architecture wasn't even a real profession in those days; professional architects weren't licensed in most states. Wisconsin, as a result of this and other incidents, was one of the first to require it. What probably wasn't asked (because it wouldn't have been popular) were things like: Is this a safe environment? Are the materials sound? Do the workers have the necessary training and skill to do the job? What are the weather conditions going to be like? Will the materials being used perform in those conditions as expected? Are the weights and proportions being used appropriate according to mechanical calculations? Do project punchlists conform to standard site requirements for similar projects? But ultimately this project probably failed and cost lives not just because thought wasn't put into the above questions - but because shortcuts were taken in order to save money. Sound familiar?

Follow the Money


Likewise, in MLearning we have to follow the money. The money is in all the bells and whistles that make mobile learning sing. Forget why someone needs mobile learning, forget whether adults are motivated to learn on tiny screens or where they're going to stay glued to them, and forget the notion of evaluation altogether - MLearning is "here to stay." Or is it?

Modern society is absolutely obsessed with execution. I've noticed that, as the gadfly in the ointment, when I question whether something is worthwhile I receive a quick gasp and feel like the kid who just asked where babies come from. Horrors, little Johnny has asked the unthinkable: should we even be doing this?

MLearning has become such a phenomenon, and people have put so many resources into it, that I doubt I could stop it even if I wanted to. The point being that MLearning is like anything else: if it's worthwhile, it takes time and effort.

Worthwhile Things Take Time 


I know, I know - it's slow and old-fashioned, this idea that worthwhile things take time. When I was a teen there was a wine ad in which a famous Hollywood celebrity would remark, "We shall sell no wine before its time." So the Analysis phase in the ADDIE model is slow, painful, and asks embarrassing ("Is this likely to succeed?") questions that make adults in certain mid-level positions squirm with discomfort. (I.e., he's asking whether my project is even worthwhile - eek.) No, no, I'm asking basics: What's the business objective? Who is taking ownership of the project? Who is the audience? What is it they need to learn? What objectives must be fulfilled in order for them to perform "x" on the job? Which is different from: What steps are required to complete a given task? What knowledge (that is, more generalized knowledge) is required? Have the adults involved done these things before? How many, or what percentage of the whole? What on-the-job evidence shows they have mastered the learning? What new behaviors should I observe if the training is successful? What behaviors are we observing right now?

Notice the above questions don't ask: How large should the screen size be? What should be on the screen? Those considerations come much later. The screen size, what should be on the screen, how it should be laid out - that's all valid to the learning, but it belongs closer to the Design and Development portions of the ADDIE model, and it comes only after the Analysis phase is complete.

But I Just....


This is about the time most people's eyes glaze over and they begin to sigh. That's because they know "this" MLearning, my MLearning, will take TIME. No, no, they protest, I just need something simple to teach "X". Hmmmm, really? If it's that simple, then why does it need to be available via MLearning? Why not just hold a webinar - and let's plan that - or how about a job aid? Bigger sigh. "I guess so... maybe..." Oh, that's not what they had in mind. They wanted something on the screen - well, I can do that - I can create an interactive PDF that people can access on the screen. Bigger sigh. "Yeah." So they go off and create a training class themselves, because the perception is that the splash wasn't the answer, so maybe an in-house developed thing would work better. But then they're not doing the Analysis step that asks: How do I know if the learning is successful?

The Edsel: What Do Results Look Like?


The thing you produce isn't very worthwhile if it doesn't produce results. Ford spent a great deal of money on the Edsel. It failed miserably. No one dared ask the question: Did people want this car? Because it was predestined by politics. And so it is with any technology in the workplace: if we invest in MLearning, we darn well better get some return on investment.

The moral of this story: invest the time and materials in the Analysis phase first - then envision the deliverable and the form it will take at the end.

Sunday, May 9, 2010

Screen Captures: Dealing with Legacy and Keeping It Confidential




Many organizations still actively use so-called legacy systems. The average person recognizes these by the dark (often black) screen and the green or orange bit-mapped letters (raster fonts). These systems are powerhouses that most people rarely see today, but they were once the workhorses of business and industry. Governments (in many countries), banking, telecommunications, and the military still rely on them. The result is that businesses still need to train people on these systems.

Mainframe systems are by their nature very expensive. They were expensive to create and remain expensive to maintain and operate. Some planners of these systems built development and even training environments. Others thought the entire process too expensive, skipped training, and simply created development environments.

Today organizations are presented with a need to train learners remotely. Using live systems is risky at best. One alternative is screen capture software such as Camtasia Studio, Articulate (with Video Encoder) and Adobe Captivate.

Those who work with screen capture software understand the complexities of using it. Those who do not use it on a day-in, day-out basis may not recognize both its benefits and its intrinsic limitations.

One challenge facing those who capture the data is that a training environment may not be available. The result is that a choice must be made: capture from a live system, or find another solution of some kind.

Capturing live data requires "scrubbing," and scrubbing presents many problems. On screens from modern web-based applications, and even Windows applications, the prospect is difficult but not impossible. That is not the case with legacy systems. We take TrueType fonts for granted. They proliferate in applications today - so much so that we forget that twenty years ago these scalable fonts were not yet widely available.

Once the data has been captured, scrubbing begins by covering over the offending fields. In Adobe Captivate and Adobe Photoshop one can take other portions of the screen and mask the data fields with them. Then the mask and the actual file are combined until the data is completely replaced. Provided there is no screen variation, this is fairly easy to do. Then the challenges begin. Screen capture software takes snapshots at a rate of so many screens per second, so the same fields may need to be masked on many individual frames; a copy-and-paste technique can be used to make things easier and shorten development time.
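As a rough illustration of the copy-and-paste masking idea (not the actual workflow inside Captivate or Photoshop, which is manual), here is a minimal sketch using Python and the Pillow library; the file names and coordinates are hypothetical and would need to be taken from the real captures.

```python
from PIL import Image

# Hypothetical file name and coordinates - adjust for the real captures.
frame = Image.open("capture_frame_001.png")

# A region of the screen that contains only safe, non-sensitive background.
clean_patch = frame.crop((600, 300, 760, 318))  # (left, top, right, bottom)

# Top-left corners of the fields that hold live data on this frame.
sensitive_fields = [(120, 140), (120, 172), (120, 204)]

# Paste the clean patch over each sensitive field.
for left, top in sensitive_fields:
    frame.paste(clean_patch, (left, top))

frame.save("capture_frame_001_scrubbed.png")
```

The same coordinates can then be reused across every frame in the capture, which is what makes the copy-and-paste approach faster than masking each frame by hand.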

The real challenge, though, is the font. The legacy system font is bit-mapped. Furthermore, the size of the font once the screen is captured may not equate to standard font sizes. Assume the font was 11 or 12 points when the screen was captured. Depending upon the screen, the developer may or may not be able to match the font. Finding a bit-mapped font that renders in the right shade of green is a huge challenge. It is possible to develop the bit-mapped font, but then every character that might be used would need to be developed.
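If an approximate font can be found or built, the replacement text can at least be drawn over the masked field programmatically. A minimal sketch, again with Pillow, where the font file, color, and replacement value are all hypothetical:

```python
from PIL import Image, ImageDraw, ImageFont

frame = Image.open("capture_frame_001_scrubbed.png")
draw = ImageDraw.Draw(frame)

# Hypothetical: a font file that approximates the terminal's bitmap font,
# and a green sampled from the capture to match the phosphor color.
terminal_font = ImageFont.truetype("legacy_terminal_approx.ttf", 12)
terminal_green = (51, 255, 51)

# Draw fictitious replacement data where the live value used to be.
draw.text((120, 140), "DOE, JANE", font=terminal_font, fill=terminal_green)

frame.save("capture_frame_001_replaced.png")
```

Even with this, the rendered characters rarely match the captured raster glyphs pixel for pixel, which is exactly the problem described next.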

Copying and pasting individual characters is simply not realistic; the results will not look polished enough to be considered professional. Rebuilding the text with an ordinary renderable font, especially a TrueType font, will not look realistic either, even though these are the dominant fonts in most technology today.

Recent research, based on various keyword searches on Google, turned up no solutions to this problem using the tools at hand.