Restoring Tabs from Sync

September 15th, 2014

I keep forgetting how to do this and searching for the answer on the web just leads you down a horrible twisty maze of passages all looking the same.


Your Google Chrome session has become discombobulated and when you close the browser all your open tabs get lost. There are lots of reasons why this can happen and most of them are not Chrome's fault; last time, for me, my disk was full.

When you resolve the issue that caused the lossage, you might be lucky and your open tabs come back automatically. More often than not they disappear and it is necessary to restore the open tabs from your sync account. That should be easy but where on earth is the magic button that does it?


I suspect Google are hiding this option deeper and deeper in the browser config because it is not used very often; fine, but how are you supposed to find it when you DO need it?


  1. Open the settings dropdown
  2. Hover the mouse over "Recent Tabs" until the dialogue pops up
  3. Choose the "more..." option at the bottom

This should open a page in your browser listing the computers that sync to your account. Beside the name of each computer is a dropdown with an option to Open All; choose that and all your previously saved tabs come back.


Dynamic CSS in Django

August 14th, 2014

For a while I have been thinking of having my Django sites include CSS styling from a database; for instance having different themes that can be switched on and off at will. It took me a while - with some help from Django users & docs - to figure out how to do it.

My eventual solution uses a template file containing variable names where colours should go; the variables get their values generated at run-time from the database.

Designing and populating a database to manage the variables is pretty straight-forward; generating the CSS file, it seems, is not a common feature; hence this posting.

The Template

Suppose we have a template containing CSS which we want to alter dynamically; something like this

#afc-portal-globalnav {
  background-color: {{ globalnav_background_normal }};
  border: 1px solid {{ globalnav_border_colour }};
}

#afc-portal-globalnav ul {
  color: {{ globalnav_colour_normal }};
}

Here we have a few variable names (globalnav_background_normal, globalnav_border_colour, etc) we want to replace with colours picked from our database.

The View

We can create a function-based view which sets up the response, queries the database, renders the template and returns the page.

In the first part, we make sure the response will be of the appropriate type. We need to do more here to make sure the page will be cached appropriately - we do not want to be generating this file for every query.

from django.http import HttpResponse
from django.template import Context, loader

def themecss(request):
    # Create the HttpResponse object with the appropriate header.
    response = HttpResponse(content_type='text/css')

Now we can query the database and assemble the key, value pairs for the variables with their colours.

    context = {}
    for item in Tag.objects.all():
        context[item.tag] = item.colour

And finally we load the template with our variables and return the page.

    t = loader.get_template('afc_skin_theme.css')
    c = Context(context)
    response.write(t.render(c))
    return response

With this in place and our database populated with an appropriate structure, we can define an url to call the view and include this in our regular pages. Without any caching controls, the styling can be changed more or less immediately.
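As a sketch of that wiring (the app name, URL pattern and template path here are illustrative assumptions, not taken from the original site), the urlconf entry might look like:

```python
# urls.py -- illustrative; adjust the import to wherever themecss lives
from django.conf.urls import url
from themes.views import themecss

urlpatterns = [
    url(r'^theme\.css$', themecss, name='themecss'),
]
```

and regular page templates then pull the generated stylesheet in with a line such as `<link rel="stylesheet" type="text/css" href="{% url 'themecss' %}">`.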

Dead easy when you know how, eh?


Passing CBV data to a Form

March 17th, 2013

I have just started working with Django, building an example application to find my way through the framework. Django seems to be more dynamic than I expected, and the web is full of answers to questions I almost asked but which help not at all.

I got totally stuck on this problem:

using a CBV and form_class, how can I limit a select in the form to a subset determined by a value known in the CBV

Essentially I wanted to pass a value from the View to the Form which could then limit a selector. For example, suppose we have Clients who have Contracts which have Tasks; when adding a Task for a Client we only want to see Contracts for that specific Client.

Here's the basic structure:

class Client(models.Model):
  name = models.CharField(max_length=100)

class Contract(models.Model):
  client = models.ForeignKey(Client)
  name = models.CharField(max_length=100)

class Task(models.Model):
  contract = models.ForeignKey(Contract)
  notes = models.TextField()

From a detail page for a Client, we want to be able to add a task to a Contract belonging to the Client. We provide an URL serving a form to add the task:
url(r'^clients/(?P<cid>\d+)/addtask$', view=TaskCreateView.as_view()),

This will pass a value cid to the class based view (CBV) representing the id of the Client. We want the CBV to pass this value on to the form. Here's how we can do this:
class TaskCreateView(CreateView):
  model = Task
  template_name = 'task_form.html'
  form_class = TaskForm

  def get_form_kwargs(self, **kwargs):
    cid = self.kwargs.get('cid',0)
    kwargs = super(TaskCreateView, self).get_form_kwargs(**kwargs)
    kwargs['initial']['cid'] = cid
    return kwargs

  def get_context_data(self, **kwargs):
    context = super(TaskCreateView, self).get_context_data(**kwargs)
    if 'cid' in self.kwargs:
      context['client'] = get_object_or_404(Client, id=self.kwargs['cid'])
    return context

Here get_form_kwargs picks out cid and passes it to the form via the initial keyword argument. In get_context_data we also pick out cid, this time fetching the Client record and returning it in the context so the template can render appropriate information about the Client in the Task form.

The Contract choices can now be specified in the form:
class TaskForm(forms.ModelForm):
  class Meta:
    model = Task

  def __init__(self, *args, **kwargs):
    super(TaskForm, self).__init__(*args, **kwargs)
    cid = kwargs.get('initial', {}).get('cid', 0)
    if cid:
      self.fields['contract'].choices = ((c.id, c.name) for c in Contract.objects.filter(client=cid))

Here, as the form is being initialised, if a cid value has been passed from the View we can filter the list of Contracts in the select widget.


Listing Databases in Postgres

January 22nd, 2013

When working with an SQL database, especially in a development environment, it is easy to lose track of which databases are in which instance. MySQL has a really convenient way of listing its databases with the command

SHOW DATABASES;

in a client session or even programmatically. But then, structurally MySQL is a good deal simpler - or at least it was - than Postgres.
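In Postgres the same job is done either with a psql meta-command or with a query on the system catalogue; the catalogue query also works programmatically:

```sql
-- inside a psql session, either meta-command lists the databases:
--   \l   (or \list)
-- programmatically, ask the catalogue, skipping the template databases:
SELECT datname FROM pg_database WHERE NOT datistemplate;
```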


ZPsycopgDA and no commits

January 17th, 2013

I discovered the solution to this problem a couple of months ago. Goodness knows how I happened upon it but today the same thing occurred. Of course, I had forgotten about the solution, fretting for several hours wondering what was wrong with my code. Best if I write it down this time, eh.

The Problem

Are your updates and inserts in Postgres not being committed?

You are using a modern Postgres database (8.4 in my case), using ZSQL Methods from Zope or Plone to target the database through ZPsycopgDA and psycopg2. Everything seems to be fine; all your complex reads and queries work great. But when you try to update the database using either update or insert commands, nothing seems to happen. Perhaps you have even been able to verify that the update or insert commands are actually getting through OK.

It just seems as if nothing is getting committed. What is going on?

The Solution

And yes indeed, this is the problem - your transactions are not being committed.

The reason for this is to do with isolation levels provided by psycopg2. In the adaptor (ZPsycopgDA) not all levels were supported and those that were defined were mapped incorrectly. A double whammy.

The problem should have been resolved in psycopg by now (according to this thread from August 2011) but if you still have a broken version of psycopg 2.4.2, you can make the correction by editing the dtml files add.dtml and edit.dtml in the database adaptor. The selector for the Transaction Isolation Level needs to be modified.

In old, broken versions, it looked like this:

<select name="tilevel:int">
 <option value="1">Read committed</option>
 <option value="2" selected="YES">Serializable</option>
</select>

Instead, the selector should read something like this

<select name="tilevel:int">
 <option value="0" <dtml-if expr="tilevel==0">selected="YES"</dtml-if>>Autocommit</option>
 <option value="1" <dtml-if expr="tilevel==1">selected="YES"</dtml-if>>Read Uncommitted</option>
 <option value="2" <dtml-if expr="tilevel==2">selected="YES"</dtml-if>>Read Committed</option>
 <option value="3" <dtml-if expr="tilevel==3">selected="YES"</dtml-if>>Repeatable Read</option>
 <option value="4" <dtml-if expr="tilevel==4">selected="YES"</dtml-if>>Serializable</option>
</select>
Once you have restarted your Zope instance you can then access the ZMI and edit the database connector. Choosing “Read Committed” now will set the correct value and your transactions will be committed.


Good news! As of psycopg2 v2.4.6, the distributed ZPsycopgDA has been amended and works out-of-the-box, so this fix is no longer required. And there is more: the Zope product, ZPsycopgDA, is now available in its own package from github and as an egg on pypi; the availability on pypi should mean it can be included as a requirement in a zope buildout.cfg.


Multiple Monitors on XFCE4

December 23rd, 2012

If you have configured your display to span multiple monitors, then usually when you log in to an XFCE session it will appear as if your monitors are simple clones of one another. You can use the xrandr tool to tweak your setup, but if it is not called at an appropriate time in the startup sequence some functionality may be lost, with parts of your display being inaccessible to the mouse pointer.
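For reference, that kind of xrandr tweak looks like the following (the output names VGA-1 and DVI-0 are examples; `xrandr -q` lists the names on your own system):

```shell
# list the connected outputs and their available modes
xrandr -q

# place DVI-0 to the right of VGA-1, each at its preferred mode
xrandr --output VGA-1 --auto --output DVI-0 --auto --right-of VGA-1
```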

A better way is to configure XFCE to match your desired display arrangement. However, at present (xfce-settings 4.10), there is no tool available to assist with configuring multiple monitors directly.

The Settings -> Display tool does allow configuration of screen resolution, rotation and the enabling of individual monitors. Warning: using this tool to adjust display settings will reset or lose settings made manually for properties not explicitly offered as buttons in the tool (see below).

The Settings -> Settings Editor allows manipulation of all configuration items, in particular the display settings, which are saved in the displays.xml file.


Alternatively, the displays.xml can be edited using your favourite editor.

The main requirement for multiple monitors is their arrangement relative to one another. This can be controlled by setting the Position properties (X and Y) to suit; an (x,y) position of 0,0 corresponds to the top, left position of the monitor array. This is the default position for all monitors and if several monitors are enabled they will appear as a cloned display area extending from this point.

To extend the display area correctly across both monitors:

for side-by-side monitors, set the X property of the rightmost monitor to equal the width of the left-most monitor

for above-and-below monitors, set the Y property of the bottom monitor to equal the height of the upper monitor

for other arrangements, set the X and Y properties of each monitor to correspond to your layout

Measurements are in pixels. As an example, a pair of monitors with nominal dimensions of 1920x1080 which are rotated by 90 and placed side-by-side can be configured with a displays.xml like this:

<channel name="displays" version="1.0">
  <property name="Default" type="empty">
    <property name="VGA-1" type="string" value="Idek Iiyama 23&quot;">
      <property name="Active" type="bool" value="true"/>
      <property name="Resolution" type="string" value="1920x1080"/>
      <property name="RefreshRate" type="double" value="60.000000"/>
      <property name="Rotation" type="int" value="90"/>
      <property name="Reflection" type="string" value="0"/>
      <property name="Primary" type="bool" value="false"/>
      <property name="Position" type="empty">
        <property name="X" type="int" value="0"/>
        <property name="Y" type="int" value="0"/>
      </property>
    </property>
    <property name="DVI-0" type="string" value="Digital display">
      <property name="Active" type="bool" value="true"/>
      <property name="Resolution" type="string" value="1920x1080"/>
      <property name="RefreshRate" type="double" value="60.000000"/>
      <property name="Rotation" type="int" value="90"/>
      <property name="Reflection" type="string" value="0"/>
      <property name="Primary" type="bool" value="false"/>
      <property name="Position" type="empty">
        <property name="X" type="int" value="1080"/>
        <property name="Y" type="int" value="0"/>
      </property>
    </property>
  </property>
</channel>

Usually, editing settings in this way requires a logout/login to action them.

A new method for configuring multiple monitors will be available in the forthcoming xfce-settings 4.12 release.

This posting originally appeared on the Arch Linux Wiki.


Arch Linux

October 8th, 2012

Over the years I have used almost every significant Linux distribution available: either in work or at home.

  • Yggdrasil Linux/GNU/X
  • Slackware
  • Caldera Linux
  • Red Hat/Fedora and clones like CentOS
  • SuSE and openSuSE
  • Debian and Ubuntu and spinoffs like Linux Mint
  • Arch Linux
Check out DistroWatch for information on many of these and lots more distributions. Beyond Linux, work has at times taken me onto a few of the different flavours of Unix:
  • SunOS, Solaris, OpenSolaris
  • AIX and its friends
  • BSD flavours
  • Xenix and later SCO Unix

Eventually - possibly somewhere in the very late 90's or just into the new millennium - I was able to close down my last remaining Windows system and say goodbye to Microsoft products forever - at least as far as my own home systems were concerned. I certainly do not miss the regular 6-monthly system rebuilds or the frantic virus eradication sessions or the progressive performance degradation or the enforced hardware upgrades. Even now, after all these years, I am still impressed by the multi-year uptime stats reported by my Linux systems.

Up until recently, my distro of choice has been openSuSE. Mostly because of my preferred KDE desktop and the Yast administration tool; both made managing the platform very easy. Most of the time, the openSuSE team have created really good migration tools so that moving from one version to another has been relatively painless.

Migration policy has become more important too, not just with the SuSE team but others as well.

  • the cost of maintaining version repositories has meant that most versions have a lifetime of 18 months or less before their repositories are closed
  • the rate of change of application packages and even core packages such as language versions, window managers, support tools, etc means that a particular OS version rapidly becomes outdated either missing features or forcing an upgrade
I found myself being regularly thwarted by missing repositories and therefore unable to upgrade manually to new package versions since required support libraries were missing. As a result, I was forced to do without new, improved versions.

Then I discovered Arch Linux. I tested it out on my Asus netbook and liked it so much, it has spread now to all my home systems. The key differentiator compared to most, if not all, distributions is the lack of versions: there is only the current active one.

To be sure, it is not a distribution for everyone; it can be difficult to configure compared to the likes of SuSE and package updates can cause temporary breakages. Still, for me the cost of these inconveniences is far exceeded by the knowledge that my systems are generally as current as they can be and most of all I can exercise more control over what is installed compared to any other distro.

Unlike most other systems, I can also engage directly with the distribution; helping to evolve documentation and even contributing packages. Being able to give something back, however small, for the amazing software countless people continue to contribute to Linux is a pretty awesome experience.

Arch Linux rocks!

plpgsql: INSERT data using RECORD

August 6th, 2012

Inserting records into an SQL database using an existing table as a source usually means we need to know and specify the column structure of the target table. If we are trying to do this using a stored procedure, then our script must be changed every time the table's schema changes.

Here is a way to simplify this job of duplicating records in a Postgres database.

The Use Case

Suppose you have a database with 2 tables.
  • A library table containing details of different book libraries
  • A book table containing references to actual books
So we know all the books in a library and which library has each book.
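A minimal sketch of such a schema (the column names beyond the library id are illustrative assumptions; note that, matching the queries below, the books table uses a column called id for the owning library):

```sql
-- illustrative only: the real tables will have more columns
CREATE TABLE library (
    libid SERIAL PRIMARY KEY,
    name  TEXT NOT NULL
);

CREATE TABLE books (
    id     INTEGER NOT NULL REFERENCES library (libid),  -- owning library
    title  TEXT,
    author TEXT
);
```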

Now suppose we decide to add a new library and populate it with the same books in an existing library. We could iterate through a select on the books

SELECT * FROM books WHERE id=oldid;

where oldid is the id of the existing library.

And for each record, we want to add a new record using newid as the id of the new library. This is quite awkward; all we want to do is replace the old id with the new, but we have to unpack all the columns of the books table and repack them in a new insert; something like

field1 = book.field1
field2 = book.field2
field3 = book.field3
INSERT INTO books VALUES (newid, field1, field2, field3,.....);

This is quite tedious and, if we want to achieve this programmatically, means we have to specify the table structure of the books table in such a way that if the schema changes, we have to remember to go back and change this script as well.

But there is a better way:

The Solution

Instead of stating the fields explicitly, we can use the

INSERT INTO books SELECT book.*;

form (where book is a record variable holding one row), provided we can figure out a way of replacing the library id. The following plpgsql script illustrates how to do this:

CREATE FUNCTION duplicatebooks (oldid INTEGER, newid INTEGER) RETURNS void AS $$
DECLARE
  book books;
BEGIN
  FOR book IN SELECT * FROM books WHERE id=oldid LOOP := newid;
    INSERT INTO books SELECT book.*;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

In this script:

  1. we pass the ids of the libraries, oldid and newid, as parameters to the function
  2. declare a book RECORD variable to contain data on each book
  3. iterate over the books we want to duplicate
  4. change the id of the book record to match the new library
  5. and finally add the new book record
In this way, we do not even need to know the other fields in the books table.
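With the function in place, duplicating a library's books is a one-liner (the ids here are illustrative):

```sql
SELECT duplicatebooks(1, 2);  -- copy the books of library 1 into library 2
```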


In principle, we could have used a generic RECORD variable when iterating over the books table but unfortunately trying to pass a generic RECORD to the INSERT statement is explicitly disallowed and results in the error:

ERROR: record type has not been registered

The solution is to just declare a type for the record at the outset. So instead of having this fragment (which fails):

  book RECORD;
  FOR book IN SELECT * FROM books WHERE id=oldid LOOP := newid;
    INSERT INTO books SELECT book.*;
  END LOOP;

we do this instead

  book books;
  FOR book IN SELECT * FROM books WHERE id=oldid LOOP := newid;
    INSERT INTO books SELECT book.*;
  END LOOP;

Storing Plone users in an SQL database

January 10th, 2012

In the past, a couple of Plone sites we have built had large numbers of users for whom it was easier to store their details in an SQL backend rather than Plone itself. When it came to bringing one of those systems up-to-date, it was appropriate to review the process and perhaps use a different approach such as a full-on LDAP deployment. During this review process, we learned how to deploy a new product and figured out a solution to a possible use case.

Deploying pas.plugins.sqlalchemy

Unfortunately the help documents on the web did not help us install this product and it took some research to discover much simpler instructions buried deep in the plone.users mailing list; so deep, in fact, that it remains hidden. Here is what we did:
  1. add pas.plugins.sqlalchemy to the eggs section in buildout.cfg
  2. add pas.plugins.sqlalchemy to the zcml section in buildout.cfg
  3. adding collective.saconnect did not seem to work for us or at least was not helpful so do not install this product
  4. add a definition to the instance section in your buildout.cfg appropriately modified to point to the database you want to use; best to make sure you can get this to connect first
    zcml-additional =
      <configure xmlns="">
        <include package="z3c.saconfig" file="meta.zcml" />
        <db:engine xmlns=""
                   url="mysql://user:password@host/database" />
        <db:session xmlns=""
                    engine="pas" />
      </configure>
  5. you can now startup your Plone instance
  6. from the Plone control panel, add the PAS SQL Plugin product: this step should both connect to the database and create the database schema with empty tables
  7. to activate the plugin, access the ZMI and navigate to the acl_users folder and the plugins sub-folder. Review each plugin-type and if an sql option is available, change its precedence to suit your purposes.
Provided you have set the precedence of sql in the User Adder plugins (step 7), you can add new users and they will be stored in your SQL database. Notice that the settings in step 4 apply throughout an instance: if you have several Plone instances within a single Zope instance, then each Plone instance with the PAS SQL Plugin activated will share the same user SQL database - as in the Use Case below.

Use Case for multiple sites

Note: the following notes only apply for a group of Plone sites; although the use case may be general, this solution is specific to Plone

Suppose you require a number of related sites all of which relate broadly to the same group of users:
  1. Paid-up members or Active members: assigned as Members and possibly Contributors
  2. Lapsed members: having a login but no role assignment
  3. Various organisational sub-sets of members: assigned the various management roles of Editor, Reviewer, Manager, etc
  4. Anonymous visitors: obviously have no login
In your sites, you want to allow members to log on and authenticate themselves. Then, depending on the site and the specific user, various roles can be assigned to allow access to different types of content. When deploying this plugin, user information is stored in the database but user and group permissions remain in Plone. Using this behaviour, we can use group definitions to control who can do what in specific sites, all with a minimum of tweaking of the Plone instances. First the minimal changes:
  1. set sql to the top of all its relevant plugin types except for Group Management - we want Plone instances to drive this part of user management
  2. set Intranet/Extranet Workflow as the default for each site
  3. for the internally_published state, under the Permissions tab, switch on Authenticated permission for both view and access options (this is optional and depends what you want Lapsed Members to see)
  4. activate the changes for the workflow
  5. in each site, add a membership Group with a roles you want all Active members to have
  6. for each site, add additional groups for any special subsets you may want setting roles as required
  7. add members in one site assigning each to appropriate groups for that site
  8. in additional sites, add members to any groups peculiar to these individual sites
Having done all this, you should now get the following behaviour
  1. Anonymous users: can only see Externally Published content
  2. Lapsed Members: can also see Internally Published material if they are logged in
  3. Active Members: can see all content except items marked as private and whatever additional roles you have assigned
  4. Special Members: certain members will have greater access depending on the roles they have been assigned in each site
These behaviours are easily managed simply by adding users to one or more groups. Further, a new user only needs to be added in one site, and usually Group membership can be assigned at that time (except for Groups which are specific to individual sites). Active Membership can also be controlled by an external application updating the SQL database independently of any Plone instance, i.e. for lapsed members, remove their appropriate Group membership via the database. This procedure will affect all related sites immediately without having to do anything else.

Subversion Versioning

October 18th, 2011

When working with a version control repository, it is often useful to know which version of a script we are working on or - in a production environment - which version we are actively using. Subversion provides a facility for incorporating the current revision number in a script or source file every time it is committed to the repository. Here's how it can be used:

Adding the Subversion revision to a document

This is pretty easy. We just need to set a subversion property on the document or file and make sure a keyword is included in the document where we want the revision number to appear.
  1. to place a revision number in a document or file, include the text item
    $Rev$
    where the revision number should appear. This will be replaced by text like
    $Rev: 444 $
    the next time you commit your script (the number will be different of course)
  2. to tell subversion to update this text item when we commit, we need to set a property on the file; we can do this with a subversion command like
    svn propset svn:keywords "Rev" myscript.sh
This is elaborated on in the subversion manual and we can include other items of information using keywords such as date, author or id (a combination of other keywords). There are a couple of issues you need to be aware of:
  1. using a keyword string such as
    $Rev$
    as normal content in your document can cause it to be interpreted as a variable, which may need disguising e.g. see the perl and bash scripts below
  2. subversion properties can be fragile and are largely invisible i.e. properties cannot be set using wildcards (only by individual filename) and it is not easy to see which files have which properties set

A simple bash script to set properties on all files in a folder might look like this

KEYWORDS="Rev Id Date Author"
SVNPROPS="svn propset svn:keywords"
for FILE in *
do
    if [ -a "$FILE" ]
    then
        $SVNPROPS "$KEYWORDS" "$FILE"
    fi
done

Having this information automatically updated in your subversion content is very useful for documentation purposes, if nothing else.

Using $Rev$ in our version number scheme

Now that your documents or scripts have a revision number embedded in them, how can we use this information to construct a document or script version number? For instance we could have a version number of 1.2.321 where 321 is a subversion revision number. Generally we need to manipulate this information as a string. Here are a few examples in different languages. First, in python:

__version__ = '0.1.' + '$Rev: 21 $'[6:-2]

In this case we are just treating the revision string as a simple string and slicing it, creating a version number like 0.1.21. And in perl:

my $Rev = 0;
my $VERSION = "$Rev: 21 $Rev";
my @Ver = split(" ", $VERSION);
$VERSION = "1.2.$Ver[1]";

Here we use a trick to fool perl: we initialise a $Rev variable so we can include the key-name in a regular string (rather than having it embedded in a comment somewhere); now we can just split the string into parts and pick out the value we want, to give us a version of 1.2.21. Simples! And in bash:

REV="\$Rev: 21 $"
REV=${REV/\$Rev: /}
REV=${REV// \$/}
VERSION="2.3.$REV"

Here we use bash parameter expansion to strip the revision string of the characters we do not want, to give us a version of 2.3.21.

Sadly I cannot claim any of these tricks as my own, nor can I provide the original author's names - they have been lost in the mists of time.

© 2013 Andy Ferguson