Tuesday, May 31, 2011

Greenhouse 5: part 1


Off to the east of what we call the rose corral sits a greenhouse about half the size of our regular greenhouses.  It was abandoned by a former tenant this Spring.  I jumped on the opportunity to turn it into my own private garden.  North of the power lines, our greenhouses are numbered from west to east beginning with zero. That makes this Greenhouse 5.

My irrigation mainline runs just this side of the front.  It was easy to tap in to provide a source for an indulgence of drip irrigation.  Drip irrigation is like tinker toys for adults: it has all the little snap together parts, but it makes something useful.
Last year had a rip-off of a Summer.  Crops of all sorts failed here in Western Oregon because it never got hot.  Our rose propagation here at the nursery was nearly a complete failure.  This year isn't shaping up any better - it's one of the coolest, wettest Springs in my memory.  So I'm not leaving the hot weather crops to chance.  I want to grow melons, so I figure I'd better do it in a greenhouse.  The east floor of the greenhouse is dedicated to miniature watermelons.

The center bench is for my peppers.  These are mostly variants of bell peppers with a few interesting hot ones in the rear.  I've found that as I've gotten older, I have less tolerance for the really hot peppers that I used to grow long ago.
I've gotten frustrated over the years that slugs, birds and nutria eat our strawberries before we do.  So I've got eight hanging pots dedicated to keeping my strawberries unmolested.
This will be a big tomato year - all of our tomato plants will be in greenhouses.  We won't be having any more of this waiting until mid-October for tomatoes to ripen.  The front windows of the greenhouse are crowded with tomatoes.  Some are just starts; others that we moved from one of the other greenhouses are already starting to show fruit.
In the northeast corner there is a huge volunteer flowering tobacco.  It is heavily fragrant in the evenings - in the closed quarters of the greenhouse, it is nearly too much.
Beyond the front glass is a field of squash being grown under protective row covers.  To the right, but not yet visible, are long rows of corn.

I'm excited to have had the time over the Memorial Day weekend to set up this gift of a greenhouse.  I'm going to enjoy watching this place turn into an overgrown riot over the next four months.  I'll post more photos as it grows.

Sunday, May 29, 2011

The Long Road to Electric: part 2

(this blog post was written two weeks ago and for some reason, it was never published - so better late than never)

We've had the electric car for just over two weeks.  So far, it has been truly amazing.  We bought it knowing full well that it wouldn't be a car that we took on road trips.  Its purpose was for puttering about in town.  It does that brilliantly and very inexpensively.

In a typical week, we drive about a hundred miles.  The Subaru Outback, never a particularly economical car, would get about twenty-one miles per gallon for in town driving.  I'll be generous in my calculation and call it twenty-five miles per gallon.  With the cost of gas being just under four dollars a gallon, that's about sixteen dollars for a week's worth of driving.

The electric car is doing about six miles per kilowatt hour.  At a cost of just under ten cents per kilowatt hour (sorry Californians, we're all hydro around here and electricity is cheap), a week's worth of driving is costing us under two dollars.

Yesterday, we decided to experiment and drive to Eugene and back: about eighty miles.  The electric car gets its best mileage at city speeds, not highway speeds.  While we can get close to a hundred miles of range in town, on the highway, it doesn't do so well.  We knew we'd have to recharge in Eugene.

The trip down to Eugene took about eighty percent of a full charge.  We parked the car at Lithia Nissan of Eugene, plugged it in and then walked downtown.  We visited the Saturday Market and several shops.  About three hours later, we wandered back to the car and found it at eighty percent of full charge.  It displayed an estimated range of seventy miles.

We drove home.  It was as simple as that.  We arrived in Corvallis with fifteen percent of full charge, ran a couple quick errands in town and then home.  The electricity cost to us for the trip was less than two dollars.  One of the benefits of early adoption of this technology is that charging stations are usually free.  While the car charged in Eugene, we were not charged at all.

Why did it use less power to drive home?  There are a lot of variables that have noticeable effects: head winds, elevation gain and loss, etc.  When driving to Eugene, we were heading toward a storm.  On the way back, there were no storms.  Could it be that Eugene at 426 feet versus Corvallis at 228 feet would make much difference?  I don't know, but it surely is a contributing factor.

In all, it wasn't as convenient as using the Subaru would have been.  We had to park at the charging station and then walk to our destination.  We didn't have the spare power to drive around in Eugene; the car had to spend the time recharging for the trip home.  Overall, however, it was a good experience, though a different one than the Subaru would have given us.

My conclusion from this?  I still get to be smug.

Monday, May 23, 2011

Configuration - Part 3

Okay,  in installment number three, we're going to get to something interesting.  I made a statement in the first part of this series about programming having become more about configuration and less about algorithms.  I used Java's XML madness as an example.  When working in that world, I really loathed it -- but I understand it. It is a powerful concept and the basis for dependency injection.  Here's how to do it with the ConfigurationManager.

    import config_manager as cm
    n = cm.Namespace()
    n.option('storageClass',
             doc='a class name for storage',
             default='socorro.storage.crashstorage.HBaseCrashStorage',
             from_string_converter=cm.class_converter)
    conf_man = cm.ConfigurationManager([n],
                                       application_name='sample')
    config = conf_man.get_config()
    print config.storageClass


On invocation, the ConfigurationManager will take the 'storageClass' option, overlay any replacement values from the environment, config files and the command line, then dynamically load the module and finally assign the resultant class to the key 'storageClass' in the mapping 'config'.

    $ python sample.py --storageClass=socorro.storage.crashstorage.LegacyCrashStorage
    <class socorro.storage.crashstorage.LegacyCrashStorage>


We can dynamically load classes from modules, which means that we can select and instantiate objects at runtime.  Programs that use this technique can instantiate and use objects that weren't even conceived of when the programs were originally written.  However, this is just the first step.

In the example, the class HBaseCrashStorage has some configuration requirements of its own.  For example, to open a connection to HBase, we need  'host', 'port' and 'timeout'.  Since the program doesn't know ahead of time what class it will be loading, it can't know ahead of time what config parameters it's going to need.  The class itself is going to have to cooperate and inform the configuration manager of its needs.

On dynamically loading a module containing a desired class, the ConfigurationManager interrogates the class for its configuration needs by invoking a function called 'get_config_requirements'.  If the class is equipped to respond, it will return a list of Options.  For example, the HBaseCrashStorage class returns a list defined like this:

    rc = cm.Namespace()
    rc.option(name='hbaseHost',
              doc='Hostname for HBase/Hadoop cluster. May be a VIP or '
                  'load balancer',
              default='localhost')
    rc.option(name='hbasePort',
              doc='HBase port number',
              default=9090)
    rc.option(name='hbaseTimeout',
              doc='timeout in milliseconds for an HBase connection',
              default=5000)
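
Here's a rough sketch of how that hook might hang off a class.  The class body below is invented purely for illustration; the real HBaseCrashStorage does far more than this:

    # an illustrative sketch only - not the real Socorro class
    class HBaseCrashStorage(object):
        @classmethod
        def get_config_requirements(cls):
            # hand the Namespace of Options defined above back to
            # the ConfigurationManager
            return rc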


How can this work?  By the time the ConfigurationManager has loaded the module, isn't it too late to apply new configuration variables?  Well, yes, but the ConfigurationManager knows that if it has loaded a class, it had better do the whole overlay of value sources again.  Maybe that second overlay will dynamically load more classes, forcing the ConfigurationManager to overlay a third time.  In fact, the ConfigurationManager will repeat the overlaying until it knows that it hasn't loaded any new classes.
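
Conceptually, the process looks something like the sketch below.  This is only an illustration of the repeat-until-stable idea; it is not the ConfigurationManager's actual code, and the helper arguments are invented:

    # illustrative pseudo-implementation of the repeated overlay
    def overlay_until_stable(options, value_sources, requirements_of_loaded_classes):
        while True:
            for source in value_sources:           # env, config files, command line
                for name in options:
                    if name in source:
                        options[name] = source[name]
            new_options = requirements_of_loaded_classes(options)
            if not new_options:
                return options                     # nothing new was loaded - done
            options.update(new_options)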

Here's the help output for the default run of the sample2.py program:

     $ python sample2.py --help
     sample
         --_write
             write config file to stdout (conf, ini, json) (default: None)
         --config_path
             path for config file (not the filename) (default: ./)
         --hbaseHost
             Hostname for HBase/Hadoop cluster. May be a VIP or load balancer (default: localhost)
         --hbasePort
             HBase port number (default: 9090)
         --hbaseTimeout
             timeout in milliseconds for an HBase connection (default: 5000)
         --help
             print this
         --storageClass
             a class name for storage (default: socorro.storage.crashstorage.HBaseCrashStorage)

Now we run it again and change the 'storageClass' on the command line:
   
     $ python sample2.py --help   --storageClass=socorro.storage.crashstorage.LegacyCrashStorage
     sample
         --_write
             write config file to stdout (conf, ini, json) (default: None)
         --config_path
             path for config file (not the filename) (default: ./)
         --deferredStorageRoot
             a file system root for crash storage (default: ./def/)
         --dirPermissions
             the permissions to use in creating directories (decimal) (default: 504)
         --dumpDirCount
             the max number of crashes that can be stored in any single directory (default: 1000)
         --dumpFileSuffix
             the file extention for dump files (default: dump)
         --dumpGID
             the GID to use when storing crashes (leave blank for file system default) (default: None)
         --dumpPermissions
             the permissions to use in storing crashes (decimal) (default: 432)
         --help
             print this
         --jsonFileSuffix
             the file extention for json files (default: json)
         --processedStorageRoot
             a file system root for crash storage (default: ./pro/)
         --storageClass
             a class name for storage (default: socorro.storage.crashstorage.LegacyCrashStorage)
         --storageRoot
             a file system root for crash storage (default: ./std/)


This time, the help output looks very different because the requirements of the LegacyCrashStorage class were much more extensive than the requirements of the HBaseCrashStorage class. 

Just like any config parameter, 'storageClass' can be overridden in the environment, an ini, conf or json file, the command line, or whatever source you can think of.

In the examples that I've given here, the HBaseCrashStorage and LegacyCrashStorage classes derive from a common base class.  The configuration manager module defines a mix-in base class called 'RequiredConfig' that provides some structure for hierarchical discovery of required configuration parameters.  Classes that derive from this base only need to define a class level Namespace (or dict) called 'required_config'.  The base class provides the method for walking the inheritance tree and collecting all the Options into one Namespace.
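
Here's a rough sketch of that pattern.  The class names are invented for illustration and the options are borrowed from the help output above:

    # illustrative only - real Socorro classes define many more options
    class CrashStorageBase(cm.RequiredConfig):
        required_config = cm.Namespace()
        required_config.option(name='storageRoot',
                               doc='a file system root for crash storage',
                               default='./std/')

    class SomeCrashStorage(CrashStorageBase):
        required_config = cm.Namespace()
        required_config.option(name='dumpFileSuffix',
                               doc='the file extension for dump files',
                               default='dump')

    # walking SomeCrashStorage's inheritance tree collects both
    # 'storageRoot' and 'dumpFileSuffix' into a single Namespace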

The examples that I've shown here have used classes, but the option could specify just a module.  I can see this being used, perhaps, to switch an application between Postgres and MySQL.  I'm even using it in unit testing to 'inject' mock objects into instances of classes.

The next topic in this series?  Nested namespaces.

Tuesday, May 17, 2011

Configuration - Part 2

The configuration manager that I spoke of in Part 1 has the task of merging configuration information from a bunch of separate sources.  In this posting, I'm going to talk about how it does the merging and then expound on some of the benefits of the technique.

First, the configuration manager needs to be informed of what configuration options it's going to be working with.  This is done by passing in mappings of names to Option objects.  The Option objects are just definitions of name, documentation, default value and a reference to a string conversion function.

The ConfigurationManager accepts a list of these objects as the first parameter to the constructor.  In the example from the last posting, I used a Namespace object to hold several Options.  The Namespace object is just a dict with some syntactic sugar that allows lookups using dot notation.  Any mapping will work fine, including the builtin dict.  The list of mappings is merged into a single master mapping within the ConfigurationManager instance from left to right.  If there is a conflict between members of the list over the same name, then the rightmost entry wins.

    import config_manager as cm
    n = cm.Namespace()
    n.option(name='datetime',
             doc='the date and time to process',
             default='2011-05-04 15:10',
             from_string_converter=cm.datetime_converter)
    conf_man = cm.ConfigurationManager([n],
                                       application_name='sample')
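
As a quick aside, here's what that rightmost-wins merging looks like when two namespaces define the same option.  The second namespace below is invented purely for illustration; the rest of this posting continues with the single namespace 'n' from above:

    overrides = cm.Namespace()
    overrides.option(name='datetime',
                     doc='the date and time to process',
                     default='1969-07-20 20:17',
                     from_string_converter=cm.datetime_converter)
    # 'n' and 'overrides' both define 'datetime'; the rightmost entry wins,
    # so this manager's default for 'datetime' is '1969-07-20 20:17'
    merged_conf_man = cm.ConfigurationManager([n, overrides],
                                              application_name='sample')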

The 'default' value within the Option objects provides the ConfigurationManager with the first cut of a value for any given option.  From there, it overlays new values for these parameters from each of its successive config information sources.  By default, it looks to the environment, then any .conf, .ini or .json files that it knows about.  Finally, it gets values from the command line as the last word on configuration values.  Should those sources prove insufficient, or the order not be right, you can use an optional second parameter.  Remember, the key/value pairs can come from any mapping object.

Taking the datetime example above as the base, let's say we've got an ini file that looks like this:

    $ cat ./sample.ini
    [top_level]
    # name: datetime
    # doc: the date and time to process
    # converter: socorro.lib.config_manager.datetime_converter
    datetime=1959-11-13 06:12

And then we invoke the program like this:

    $ export datetime='1921-01-18 09:45'
    $ python ./sample.py --datetime='1770-12-16 20:21'


The value for 'datetime' starts at '2011-05-04 15:10' as specified by the 'default' in the Option definition.  The environment (from os.environ) is applied and the value becomes '1921-01-18 09:45'.  Next, the ini file is applied and 'datetime' changes to '1959-11-13 06:12'.  Finally, the command line value is overlaid and the value becomes '1770-12-16 20:21'.

It can be referenced like this:

    config = conf_man.get_config()
    print config.datetime

This will print the last value assigned to 'datetime'.  The conversion function will have already been applied, so the final value type will be datetime.datetime.
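
A quick, illustrative check, continuing from the snippet above:

    print type(config.datetime)   # <type 'datetime.datetime'>
    print config.datetime.year    # 1770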

In the next posting on this topic, I'll show how to dynamically load classes using this configuration.

Monday, May 16, 2011

Configuration - Part 1

I've been obsessed with configuration lately.  Over the course of my career, I've noticed that there is less and less programming to do, corresponding to more and more configuration to do.  Why create a complicated program from scratch when you could take existing programs and modules and configure them to solve your problem?

One of the places where I have noted this trend is in the Java world.  It seems there are millions of pre-built Java components out there.  The trick is not to create new ones, but to configure the existing ones to accomplish the task at hand.  Java programming has become less about the language and more about manipulating huge undecipherable XML files.

In Python, we've got a number of tools to help get configuration information from various sources: getopt, argparse, ConfigParser, ConfigObj, etc.  To my eyes, however, these tools only provide partial implementations of configuration because they focus only on one dimension of the problem.

What is configuration, anyway?  I define it as the set of values that remain constant throughout runtime, but can be varied prior to run time.  There are lots of ways to get configuration information: ini files, flat config files, the command line, the operating system environment, json files, XML, etc.  I've not found an overall system that can gather configuration information from all these sources.

Back in '05, when I was at the OSUOSL, I took my first swipe at this grand unified configuration manager.  Six years later, I'm using the latest generation of the tool in the Socorro project.

It works like this: you define the configuration parameters required by a program using a homebrewed definition language consisting of Namespaces and Options.  Options are objects that define a single configuration parameter: name, documentation, default value and a function that can convert a string into the appropriate type.  A Namespace is just a collection of options. The end result is a dictionary of key value pairs accessible using dot notation.

    import config_manager as cm
    n = cm.Namespace()
    n.option(name='host',
             doc='the host name',
             default='localhost')
    n.option(name='debug',
             doc='use debug mode',
             default=False,
             short_form='D',
             from_string_converter=cm.boolean_converter)
    conf_man = cm.ConfigurationManager([n],
                                       application_name='sample')
    config = conf_man.get_config()
    print config.host
    print config.debug
 
If this program were saved as sample.py and just run from the command line, it would print this:

    $ python ./sample.py
    localhost
    False


Invoke it like this and you'll get:

    $ python ./sample.py --help
    sample
      --_write
        write config file to stdout (conf, ini, json) (default: None)
      --config_path
        path for config file (not the filename) (default: ./)
      -D, --debug
        use debug mode
      --help
        print this
      --host
        the host name (default: localhost)


This exposes some hidden options.  Let's say that we want an ini file for this program.  We can get the script to write it for us:

    $ python ./sample.py --_write=ini
    $ cat ./sample.ini
    [top_level]
    # name: debug
    # doc: use debug mode
    # converter: socorro.lib.config_manager.boolean_converter
    debug=False

    # name: host
    # doc: the host name
    # converter: str
    host=localhost


Just as easily, you could have written a flat conf file, json file, or, Lord help us, XML.  You can now edit your ini file to your heart's content.  This isn't a reimplementation of the ini support in Python; under the covers, ConfigurationManager is using the existing ConfigParser module.  So go ahead and use all the macro and substitution features available in that module.

The changes that you make to the values in this file become the new "defaults" reported by the --help feature.
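
For example, change the host in the ini file and the reported default follows along.  The hostname and the abbreviated output below are just illustrative:

    $ sed -i 's/host=localhost/host=db.example.com/' ./sample.ini
    $ python ./sample.py --help
    sample
      ...
      --host
        the host name (default: db.example.com)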

In my next post on this topic, I'll show how the configuration manager overlays values from various sources to produce its final configuration values.

This class is currently in an experimental branch of the Socorro SVN tree.  I'll announce when it gets merged into trunk and is readily available.

Sunday, May 15, 2011

The Long Road to Electric: part 1


Well, I got my Nissan Leaf on Friday the thirteenth.  The most astounding thing about this all electric car is that it is just a car.  Yeah, it's got a dashboard display from an eighties Star Trek episode, but so do a lot of cars these days.  Aside from attracting attention, it has, in the last forty-eight hours, acted just like any other new car that I've ever had: astonishingly normal.

Prior to deciding to purchase this car, I audited my own driving habits.  I drove, on average, only every other day.  Those trips overwhelmingly spanned less than thirty miles.  About once every two months, I have a two hundred mile day.  About once a year, I have a fifteen hundred mile trip.

I realized that I was a perfect candidate for going zero emission for most of my driving.  Yes, I know zero emission doesn't mean zero environmental impact.  For most of my driving, I'll use the Leaf.  For the rest, I'll borrow/rent a car, take the train, or ride my motorcycle.

I'm really interested in watching how it performs: how long, how far, and for how much?  I will be tracking its data on every use.  Its charger is metered all by itself, so I can get great power usage data.  I'll be commenting on my analysis as it happens.

We drove about sixty miles in the first twenty-four hours.  That included about ten miles at sixty-five and twenty at fifty-five.  On returning home in the afternoon, the car reported that it had twenty miles of range left.  It said that it would take sixteen hours on a 120V charger to return to full charge.  By morning, it was fully charged.

Yes, a car with this range suits my needs nicely.  I regret that I will be insufferable for the next few weeks.

Monday, May 09, 2011

Ubuntu 11.04 Review - Day 2

Ok, today I figured out that I could choose "Ubuntu Classic" as my UI.  Good-bye Unity.  I hope that next time I see you, you will have grown up and gained some flexibility.  Until such time, I don't want to see you.

I downloaded the KDE version of Suse yesterday.  It's time I took a look at what they're doing these days.  Before becoming an Ubuntu fanboy, I was deep in the Suse KDE camp.  I jumped ship there when KDE 4 came out.  I installed it on a VMware Workstation virtual machine on my main workstation.  I'll set up an instance of my work software development environment there and see how it works.

The conclusion that I draw from all of this is that I'm a disloyal fanboy. 

There will be further Ubuntu 11.04 Review fragments as I spend more time with this machine.

Sunday, May 08, 2011

Ubuntu 11.04 Review - Day 1

I think my run with using Ubuntu as the default distro on the machines around the farm is done.  I upgraded one of the 10.10 machines and wasn't pleased with what I got out of it.

I understand what Canonical is doing.  There seems to be a trend toward simplification of user interfaces as a way of making them more able to scale upward from small mobile devices.  Unfortunately, I don't think that sacrificing flexibility on the high end is going to win me over to the new interface.

I recently turned in my MacBook at work because, after two years, I could not make peace with the deficiencies of the Mac OS/X User Interface.  I'm dismayed that Ubuntu is following Apple's lame user interface, warts and all.

One of the most important features to me is the concept of workspaces.  I treat each one as a different room in my home.  I have a workshop with hand tools and an office for paperwork, in the same manner that I've got separate desktop workspaces for software development and accounting.  Some tools are present in multiple rooms of my home, like, for example, scissors.  Imagine the confusion if, while in the workshop, picking up the scissors meant that you were suddenly teleported to the sewing room.

In effect, that's what Ubuntu has in its new implementation of workspaces.  If I select a tool from within a workspace that hadn't originally opened the tool, I find myself suddenly transported to whatever workspace did open it.  That is the same broken behavior that Mac OS/X exhibits by default with Spaces.

The second way that Ubuntu has copied a questionable Apple "feature" is the use of a single menu bar at the top of the screen.  The main menu for the app that currently has the focus is at the top of the screen rather than the top of the window.  I can see that this would be acceptable if the screen were really tiny.  It just doesn't scale well for large screens.

I often keep several apps open and tiled on the screen.   For apps that happen to be near the bottom of the screen, having to move the mouse all the way to the top on my thirty-one inch screen just slows me down.

I recognize that I am not a typical user, but I'm still not pleased to see Ubuntu rush to the least common denominator.  Hopefully, they will introduce, or reintroduce, some features as the new Unity interface matures.

I'm going to leave Ubuntu 11.04 on this secondary machine.  I'll comment more as I use it.  However, I'm really grateful that I didn't upgrade my main workstation.

does it do code?

This is a test to see if Blogger can handle code.  For example, this is what I'm talking about:
for i in range(10):
    print i
That was an example of Python code. I wonder if this could be extended for syntax highlighting.

Here's some Lisp that I wrote in 1987:
;;; This routine returns a list of states through which a path
;;; exists from the start to the goal
(defun extract-path (n hash)
  (let ((pred (gethash n hash)))
    (cond ((eq pred t) (list n))
          ((eq pred nil) (msg "Something horrible has happened" N))
          (t (cons n (extract-path pred hash))))))
Here's some C++ from 1991:
IntegerIndividual0::IntegerIndividual0 (void) : IntegerParameterIndividual()
{
  if (!Usage++)
  {
    MutationSize = ((DoubleIterator*)IntegerIndividual0IT)->GetValue();
    MutationRange = new int[NumberOfParameters];
    if (!MutationRange)
    {
      fprintf (stderr, "Out of Memory!!!\n");
      exit (-1);
    }
    for (int i = 0; i < NumberOfParameters; i++)
      MutationRange[i] = (int)(IntRanges[i] * MutationSize / 2.0);
  }
}
Now it would be nice to figure out a way to more automatically apply styles to a code fragment block.

What is up with this editor?  Sometimes pressing the <enter> key results in the cursor heading back to the beginning of a paragraph instead of going to a new line.  It seems to properly insert the newline, but the cursor just doesn't go where I'd expect it to.

is this thing on?

This is a test to see if this blog still works.

I've been in the market for a new blogging platform for several months.  I don't want to go to the bother of hosting it myself.  I've been on LiveJournal for years, but I wanted a fresh start.  I'm planning on writing on technical topics rather than personal ones.  LiveJournal just doesn't seem like the place for that.

Several people I know suggested that I try BlogSpot.  It turns out that I already have an account.  Sometime in 2005, I actually started here, but never followed through.  I don't imagine many accounts that have been idle for six years suddenly wake up.

My regret here is that I cannot get 'twobraids.blogspot.com' and must resign myself to '2braids' instead.  Perhaps I'll look into getting some type of custom domain.

So here I go, I hope it won't be another six years before I post again...