::xml madness

Nobody likes XML, except masochists and Microsoft consultants who charge by the hour.
Sometimes you are forced to deal with XML anyway, such as parsing the response from external APIs.
Parsing XML is a nasty business to begin with, but this particular example set new records in ugliness.

The only “solution” I came up with is too ugly for production use, but the alternatives were even uglier, so I had no option.

I hope there is someone out there reading this who can present a proper solution to the problem.

Let’s say you have this XML:

<testxml xmlns:ns1="http://www.example.com" xmlns:ns2="http://www.example2.com"><ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo></testxml>

Using xpath() you extract the content of testxml:

SELECT xpath(
    '/testxml/*',
    '<testxml xmlns:ns1="http://www.example.com" xmlns:ns2="http://www.example2.com"><ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo></testxml>'::xml
);
-- Result: <ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo>

The returned XML is not valid since it’s missing the xmlns definitions,
but the PostgreSQL XML data type doesn’t complain, which is OK I guess;
a bit of quirks mode, perhaps?

Because of the missing xmlns, it’s impossible to make use of this XML fragment returned.
You cannot extract any subsequent sub-elements in it using XPath.

For instance, this won’t work:

SELECT xpath(
    '/ns1:foo/ns2:bar/text()',
    '<ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo>'::xml
);

-- Error:
-- ERROR:  could not create XPath object
-- DETAIL:  namespace error : Namespace prefix ns1 on foo is not defined
-- <ns1:foo>
--         ^
-- namespace error : Namespace prefix ns2 on bar is not defined
--   <ns2:bar>baz</ns2:bar>
--           ^
-- Undefined namespace prefix
-- xmlXPathCompiledEval: evaluation failed

Even if you pass the namespace array to an outer xpath() call, you don’t get ‘baz’ back, but nothing at all.

SELECT xpath(
    '/ns1:foo/ns2:bar/text()',
    (xpath(
        '/testxml/*',
        '<testxml xmlns:ns1="http://www.example.com" xmlns:ns2="http://www.example2.com"><ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo></testxml>'::xml
    ))[1],
    ARRAY[
        ARRAY['ns1', 'http://www.example.com'],
        ARRAY['ns2', 'http://www.example2.com']
    ]
);

-- Returns:
--  xpath 
-- -------
--  {}
-- (1 row)
-- And NOT 'baz' which is what we want.

Therefore, given an XML “fragment” where the tags carry ns:___ prefixes but lack the xmlns:___=”URI” declarations,
is there any way to extract sub-elements, given you know the namespaces?

For instance, by wrapping the XML “fragment” inside an outer XML tag
and specifying bogus xmlns attributes for the namespaces,
I managed to hack together a work-around, probably with hundreds of flaws,
and this cannot possibly be the best way to approach the problem.

SELECT xpath_fragment('/ns1:foo/ns2:bar/text()','<ns1:foo><ns2:bar>baz</ns2:bar></ns1:foo>',ARRAY['ns1','ns2']);

 xpath_fragment 
----------------
 {baz}
(1 row)

Source code of insanely ugly “solution” to the “problem”:

CREATE OR REPLACE FUNCTION xpath_fragment(_XPath text, _XML xml, _NSNames text[])
RETURNS xml[] AS $BODY$
DECLARE
_ text;
_WrappedXML xml;
_NSArray text[][] := ARRAY[]::text[][];
BEGIN

-- Wrap the fragment in an outer <xml> tag declaring a bogus xmlns for each prefix
SELECT ('<xml ' || array_to_string(array_agg('xmlns:' || unnest || '=" "'),' ') || '>' || _XML::text || '</xml>')::xml
INTO _WrappedXML FROM unnest(_NSNames);

-- Build the namespace array, mapping each prefix to the same bogus URI
FOR _ IN SELECT unnest(_NSNames) LOOP
    _NSArray := _NSArray || ARRAY[[_, ' ']];
END LOOP;

RETURN xpath('/xml' || _XPath, _WrappedXML, _NSArray);

END;
$BODY$ LANGUAGE plpgsql;

Ideas, anyone?

PL/pgSQL #1 annoyance

I cannot count the number of times I’ve written the same stupid code,
just to do something as simple as making sure the UPDATE
updates exactly one row.

Last time I complained about this in public was at our Christmas party at the office.
Normally nobody cares to listen to me when I’ve had a few beers and start talking about SQL stuff.
Next day I noticed a colleague had spent the hang-over-morning hacking on a patch.
What an awesome Christmas party I thought!

That was quite some time ago and my code still looks ugly, so let’s complain some more and hope it helps.

Number of ugly statements in our codebase:

$ grep -r -E "(IF NOT FOUND THEN)|(RETURNING (1|TRUE) INTO STRICT)" . | wc -l

In a typical OLTP application, each UPDATE affects exactly one row.
If more than one row, or no row at all, would match, you want to throw an exception, return false,
or whatever the proper behaviour might be in the situation.

This sounds like a trivial task, but it turns out there is no nice way of doing this in plpgsql.

Here are three different work-arounds to ensure the update went through:


Even if FOUND could mean more than one row was affected,
it’s safe to assume exactly one row was updated if you are updating
using a primary key or any other unique column.

UPDATE Orders SET Processed = 1 WHERE OrderID = _OrderID AND Processed = 0;
IF NOT FOUND THEN
    RAISE no_data_found;
END IF;


If you are a bit more paranoid, you can check the exact row count,
and throw different exceptions based on if no rows were found
or if more than one was found.

UPDATE Orders SET Processed = 1 WHERE OrderID = _OrderID AND Processed = 0;
GET DIAGNOSTICS _RowCount = ROW_COUNT;
IF _RowCount = 0 THEN
    RAISE no_data_found;
ELSIF _RowCount > 1 THEN
    RAISE too_many_rows;
END IF;

This is the one I prefer. It guarantees only one row was affected,
and is at the same time a one-liner, or two lines if you count
the DECLARE of the boolean variable.

_OK boolean;
UPDATE Orders SET Processed = 1 WHERE OrderID = _OrderID AND Processed = 0 RETURNING TRUE INTO STRICT _OK;
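For completeness, here is a minimal sketch of what the third work-around looks like inside a whole function; the function name is illustrative, reusing the Orders example from above:

```
CREATE OR REPLACE FUNCTION process_order(_OrderID bigint) RETURNS BOOLEAN AS $$
DECLARE
_OK boolean;
BEGIN
-- INTO STRICT raises NO_DATA_FOUND if zero rows were updated,
-- and TOO_MANY_ROWS if more than one row was updated.
UPDATE Orders SET Processed = 1
WHERE OrderID = _OrderID AND Processed = 0
RETURNING TRUE INTO STRICT _OK;
RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
```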

However, none of them is pretty. Something this simple and frequent ought to be supported directly by the syntax of the language.

The proposed syntax in johto’s patch was:

UPDATE STRICT Orders SET Processed = 1 WHERE OrderID = _OrderID AND Processed = 0;

Tom Lane suggested to put the STRICT keyword in the beginning instead,
avoiding conflicts with table names, which sounds like a good idea to me.

STRICT UPDATE Orders SET Processed = 1 WHERE OrderID = _OrderID AND Processed = 0;

I really hope the community will find a way to fix this syntax bug,
without causing too much pain in terms of backwards-compatibility issues for users.

I remember when we upgraded from 8.4 to 9.1: I struggled changing all the code where
IN/OUT parameter names were in conflict with column names,
as the behaviour changed to raise an error.

Users who didn’t want to invest time fixing their code could simply set plpgsql.variable_conflict in postgresql.conf to get back the old behaviour.

I think this was a very user-friendly and nice way to deal with the compatibility issue.

If the STRICT syntax isn’t possible because of hypothetical compatibility issues,
perhaps it could be up to the user to turn it on or off, using a similar setting?

I would even like to go a bit further and suggest making the update of exactly one row the default in PL/pgSQL.
In the typically fewer cases where you want to update more than one row, you could explicitly specify it with some other keyword,
such as ALL UPDATE or whatever. This would make the code much cleaner, as you wouldn’t need to type STRICT
all over your code.

Securing PostgreSQL using hostssl cert clientcert=1

This is not the first time I’ve struggled getting SSL certificate validation to work,
so I thought I’d better write down how I did it this time, to avoid wasting time in the future.

For security and convenience reasons, I want to do the signing of client certificates
on a separate dedicated machine, also known as certificate authority (CA).

This allows us to grant new clients access without having to log in to the PostgreSQL
server to sign certs or modify pg_hba.conf.

We will create a special database group called sslcertusers.
All users in this group will be able to connect provided they have a
client certificate signed by the CA.

In the example below, replace “trustly” with the name of your company/organisation.

The instructions are for Ubuntu Linux 12.04 LTS.

Setup CA

The CA should be an offline computer locked in a safe.

Generate CA private key

sudo openssl genrsa -des3 -out /etc/ssl/private/trustly-ca.key 2048
sudo chown root:ssl-cert /etc/ssl/private/trustly-ca.key
sudo chmod 640 /etc/ssl/private/trustly-ca.key

Generate CA public certificate

sudo openssl req -new -x509 -days 3650 \
-subj '/C=SE/ST=Stockholm/L=Stockholm/O=Trustly/CN=trustly' \
-key /etc/ssl/private/trustly-ca.key \
-out /usr/local/share/ca-certificates/trustly-ca.crt
sudo update-ca-certificates

Configure PostgreSQL-server

Generate PostgreSQL-server private key

# Remove default snakeoil certs
sudo rm /var/lib/postgresql/9.1/main/server.key
sudo rm /var/lib/postgresql/9.1/main/server.crt
# Enter a passphrase
sudo -u postgres openssl genrsa -des3 -out /var/lib/postgresql/9.1/main/server.key 2048
# Remove the passphrase
sudo -u postgres openssl rsa -in /var/lib/postgresql/9.1/main/server.key -out /var/lib/postgresql/9.1/main/server.key
sudo -u postgres chmod 400 /var/lib/postgresql/9.1/main/server.key

Request CA to sign PostgreSQL-server key

sudo -u postgres openssl req -new -nodes \
-key /var/lib/postgresql/9.1/main/server.key \
-days 3650 -out /tmp/server.csr \
-subj '/C=SE/ST=Stockholm/L=Stockholm/O=Trustly/CN=postgres'

Sign PostgreSQL-server key with CA private key

sudo openssl x509 -days 3650 \
-req -in /tmp/server.csr \
-CA /usr/local/share/ca-certificates/trustly-ca.crt \
-CAkey /etc/ssl/private/trustly-ca.key -CAcreateserial \
-out /var/lib/postgresql/9.1/main/server.crt
sudo chown postgres:postgres /var/lib/postgresql/9.1/main/server.crt
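Before wiring the cert into PostgreSQL, it can be worth sanity-checking that a signed cert actually chains back to the CA. A minimal sketch using a throwaway CA in a temp dir (the file names here are illustrative, not the production paths above):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway CA (no passphrase, for test purposes only)
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -new -x509 -days 1 -subj '/CN=test-ca' -key ca.key -out ca.crt

# Throwaway "server" key + CSR, signed by the throwaway CA
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -new -key server.key -subj '/CN=postgres' -out server.csr
openssl x509 -req -days 1 -in server.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt 2>/dev/null

# Verify the chain; prints "server.crt: OK" on success
openssl verify -CAfile ca.crt server.crt
```

The same `openssl verify -CAfile` invocation works against the real trustly-ca.crt and server.crt.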

Create root cert = PostgreSQL-server cert + CA cert

sudo -u postgres sh -c 'cat /var/lib/postgresql/9.1/main/server.crt /etc/ssl/certs/trustly-ca.pem > /var/lib/postgresql/9.1/main/root.crt'
sudo cp /var/lib/postgresql/9.1/main/root.crt /usr/local/share/ca-certificates/trustly-postgresql.crt
sudo update-ca-certificates

Grant access

CREATE GROUP sslcertusers;
ALTER GROUP sslcertusers ADD USER joel;
# /etc/postgresql/9.1/main/pg_hba.conf:
hostssl nameofdatabase +sslcertusers cert clientcert=1

Restart PostgreSQL

sudo service postgresql restart


Configure PostgreSQL-client

Copy root cert from PostgreSQL-server

mkdir ~/.postgresql
cp /etc/ssl/certs/trustly-postgresql.pem ~/.postgresql/root.crt

Generate PostgreSQL-client private key

openssl genrsa -des3 -out ~/.postgresql/postgresql.key 1024

# If this is a server, remove the passphrase:
openssl rsa -in ~/.postgresql/postgresql.key -out ~/.postgresql/postgresql.key

Request CA to sign PostgreSQL-client key

# Replace "joel" with username:
openssl req -new -key ~/.postgresql/postgresql.key -out ~/.postgresql/postgresql.csr -subj '/C=SE/ST=Stockholm/L=Stockholm/O=Trustly/CN=joel'
sudo openssl x509 -days 3650 -req -in ~/.postgresql/postgresql.csr -CA /etc/ssl/certs/trustly-ca.pem -CAkey /etc/ssl/private/trustly-ca.key -out ~/.postgresql/postgresql.crt -CAcreateserial
sudo chown joel:joel -R ~/.postgresql
sudo chmod 400 -R ~/.postgresql/postgresql.key


The following files are created/modified on each machine:

CA:
/etc/ssl/certs/trustly-ca.pem -> /usr/local/share/ca-certificates/trustly-ca.crt

PostgreSQL-server:
/etc/ssl/certs/trustly-ca.pem -> /usr/local/share/ca-certificates/trustly-ca.crt
/etc/ssl/certs/trustly-postgresql.pem -> /usr/local/share/ca-certificates/trustly-postgresql.crt



Secure deployment of PostgreSQL functions

Two people might simultaneously modify the same database function. To protect against this scenario, I created a little tool that stages the source code of all functions in pg_proc and the definitions of all views. You then deploy your changes, and afterwards call fdiff() to show a diff of the changes made, by comparing the new content of pg_proc and the view definitions against the staged state. If the changes are what you expected, you go ahead and commit.

We’ve been using this methodology every day in both my current company and my previous one, and it has prevented us from making mistakes.

Source code and installation instructions: https://github.com/joelonsql/fdiff

This is how to use it:

  1. Start a new transaction:
     BEGIN;

  2. Temporarily store the state of all views/functions:
     SELECT fstage();

  3. Deploy changes to the schema.

  4. Show a diff of the changes made:
     SELECT fdiff();

  5. If the changes are expected, go ahead and commit:
     COMMIT;

Example of the output from fdiff():

+-------------------------------+
| Updated or replaced functions |
+-------------------------------+

Schema................: public
Name..................: process_order
Argument data types...: _orderid bigint
Result data type......: boolean
Language..............: plpgsql
Type..................: normal
Volatility............: STABLE
Owner.................: amazon
20 c OR Orders.ShippingDate IS NOT NULL
20 c OR Orders.ShippingDate > now() - interval '2 month'

Hope you’ll like it!

Will PostgreSQL 9.5 Bring Back Moore’s Law?

I just watched a presentation from Mar 11, 2012, where Paul Graham of Y Combinator gives away seven billion-dollar startup ideas:

  1. A New Search Engine
  2. Replace Email
  3. Replace Universities
  4. Internet Drama
  5. The Next Steve Jobs
  6. Bring Back Moore’s Law
  7. Ongoing Diagnosis

Idea 6 on the list is basically about the need to invent the sufficiently smart compiler: one capable of automatically breaking down a problem (expressed in code) into pieces that can be executed in parallel, in order to utilize GPUs and/or many CPU cores.

Building such a compiler is probably extremely difficult, if possible at all, perhaps mainly due to the complexity it is possible to express in C, C++, Python, Java or any other imperative language.

Imagine instead the same problem for a declarative language like SQL. Would it be equally difficult? Bruce Momjian, a core member of the PostgreSQL team, has already expressed his ideas on how to tackle the problem.

When you think about it, the query planner is a bit like a compiler. It takes the SQL and figures out a smart way to execute it, breaking it down into many sub-tasks, some of which could beneficially be executed in parallel.

So let’s hope the great pg-hackers will pull this off, so we don’t have to pay some future start-up compiler-billionaire license fees to get automatic parallelism.

Announcing the pci-blackbox: PCI-DSS compliant system with 3-D Secure support, built on PostgreSQL and PL/pgSQL

The pci-blackbox aims to simplify the process of becoming PCI-DSS compliant, by handling card data in an isolated system, physically and logically separated from the rest of the system.

flow chart

The idea comes from a guy I met over a beer who works at Skype. He explained they had done something similar: they have (or had) a special “PCI server” kept secure behind locked doors, whose only function was to encrypt/decrypt/process card data.

I thought it sounded like a smart idea, for merchants who for some reason need to become PCI-DSS compliant and cannot use a hosted payment solution.

The company I work for is in the process of implementing card payments, and we cannot use a hosted solution, so we decided to give this concept a shot, and see if we managed to come up with something useful. Hopefully we have, you be the judge.

The pci-blackbox must be run on a separate server from the main system. In this test however, everything is being run on the same machine.

The API consist of three functions:

  • encrypt_card(): Encrypt sensitive card data, return a CardKey.
  • authorise_payment_request(): Authorise payment using a CardKey.
  • authorise_payment_request_3d(): Authorise 3-D Secure payment.


Features:

  • 3-D Secure support.
  • Gateway-independent: like Spreedly, the API is generic and not specific to any gateway (though currently only Adyen is supported).
  • Host your own payment page: like Stripe, you design your own payment page and POST directly to the pci-blackbox.
  • PCI-DSS compliant.
  • Open sourced under the MIT-license.

This module is work in progress and has not been put into production yet.

If anyone knows of any other similar open-source project which provides an isolated card-component, please let me know. I couldn’t find any, which is why I started hacking on this.

The files under /nonpci are just an example implementation of how to use the pci-blackbox.

The installation instructions only setup a test environment.

Any feedback is very much appreciated, thank you!

Production upgrade from 8.4 to 9.1

On Wednesday, 2012-08-15, at 05:00 am, the work began. Four hours of downtime later, at 09:00 am, the upgrade was complete.
Two brand new identical HP DL380 Gen8 servers with 400GB SSD-disks and 192GB RAM are now serving our customers.

This was one of the most nerve-wracking moments of my life; a revert would have been impossible once the API was brought back online.
Luckily, it turned out we had done a decent job testing everything, only a few glitches caused some minor problems during the following hours.

Many thanks to Magnus Hagander from Redpill Linpro who helped out with the upgrade.
I had never seen pg_basebackup in action before, impressive, very user-friendly, it even had a nice progress bar!

Trustly Group AB is a very PostgreSQL centric payment company, so a moment like this of course called for some cake and champagne!

PostgreSQL upgrade from 8.4 to 9.1

It’s been very frustrating to see all new cool features in 9.0 and 9.1, while being stuck at 8.4.
What I’ve been looking forward to the most:

  • Streaming replication to off-load work from the master to slaves in Hot Standby mode
  • Synchronous replication to get rid of DRBD
  • Calling functions with named parameters
  • Foreign Key Locks

    In August 2010, I sent an email to pghackers reporting a strange deadlock problem.
    I couldn’t understand how you could get a deadlock by updating
    the very same row again later in the same transaction.


    I also made a screencast demonstrating the problem:


    Simon Riggs came up with a very clever solution to this problem.
    By introducing a new type of lock, it would be possible to avoid a lot of deadlocks.
    Álvaro Herrera has since been working on the implementation.
    It turned out to be a lot of work, and unfortunately the feature didn’t make it into 9.2.
    I bet this will be one of the most important performance features in 9.3.
    It will greatly improve performance by increasing the possible concurrency under a high transaction load.

    I thought it would be fun to test my old example from 2010 using the latest version of the patch.

    Let’s checkout Álvaro’s latest version of the fklocks patch:

    git remote add fklocks git://github.com/alvherre/postgres.git
    git fetch fklocks
    git checkout fklocks

    CREATE TABLE A (
        AID integer not null,
        Col1 integer,
        PRIMARY KEY (AID)
    );

    CREATE TABLE B (
        BID integer not null,
        AID integer not null REFERENCES A,
        Col2 integer,
        PRIMARY KEY (BID)
    );

    INSERT INTO A (AID) VALUES (1);
    INSERT INTO B (BID, AID) VALUES (2, 1);

    -- Process 1:
    BEGIN;
    UPDATE A SET Col1 = 1 WHERE AID = 1;

        -- Process 2:
        BEGIN;
        UPDATE B SET Col2 = 2 WHERE BID = 2;

    -- Process 1:
    UPDATE B SET Col2 = 3 WHERE BID = 2;
    -- Process 1 is waiting

        -- Process 2:
        UPDATE B SET Col2 = 4 WHERE BID = 2;
        -- Process 2 is allowed to update the same row
        COMMIT;

    -- Process 1's blocked UPDATE now completes:
    COMMIT;

    SELECT * FROM A;
     aid | col1 
    -----+------
       1 |    1
    (1 row)

    SELECT * FROM B;
     bid | aid | col2 
    -----+-----+------
       2 |   1 |    3
    (1 row)
    Both transactions were able to COMMIT successfully, hurray!

    Without this patch, you would get a “deadlock detected” error
    when Process 2 tries to update the same row, BID = 2, again.

    However, I was a bit surprised the final value of Col2 is 3 and not 4,
    as Process 2 updated the row BID = 2 both before and after Process 1.
    The logical explanation is probably that Process 1’s update was blocked
    and therefore applied last, overwriting the 4 with 3 once Process 2 had committed.

    I hope to have time to do some concurrency stress testing using
    example code and data extracted from our production database.

    Would be interesting to see if my old real-life deadlock scenarios
    are fixed by this patch, and how it affects concurrency and performance.

PostgreSQL 9.3devel – plpgsql_check_function

One of the biggest incompatibility issues between 9.x and 8.4 is the new behaviour of
PL/pgSQL functions whose parameter names are in conflict with column names.

In 8.4, PL/pgSQL silently uses the value of the parameter.

In 9.x, PL/pgSQL throws an error, but not at CREATE FUNCTION time;
only when actually executing the function and hitting a row in conflict.

This means it is currently impossible to get a list of all functions with this problem.

You could disable the new behaviour by adding #variable_conflict use_variable
to all existing functions, or via the global configuration, but this is sub-optimal;
it would be much better if the new, improved behaviour could be used for all functions.

I asked around on IRC in #postgresql on irc.freenode.net and got lucky!

A new feature will hopefully be added to 9.3: a function called plpgsql_check_function, developed by Pavel Stěhule.

plpgsql_check_function will detect any problems you might get during execution,
not only the errors you get when creating functions.


ubuntu@pgsql93:~$ psql
psql (9.3devel)
Type "help" for help.

ubuntu=# CREATE OR REPLACE FUNCTION myfunc(id integer) RETURNS TEXT AS $$
ubuntu$# DECLARE
ubuntu$# value text;
ubuntu$# BEGIN
ubuntu$# SELECT value INTO value FROM mytable WHERE id = id;
ubuntu$# RETURN value;
ubuntu$# END;
ubuntu$# $$ LANGUAGE plpgsql;
CREATE FUNCTION
ubuntu=# SELECT plpgsql_check_function(oid) FROM pg_proc WHERE proname = 'myfunc';
 error:42702:5:SQL statement:column reference "value" is ambiguous
 Query: SELECT value            FROM mytable WHERE id = id
 --            ^
 Detail: It could refer to either a PL/pgSQL variable or a table column.
(4 rows)
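Once a problematic function is found, the fix is usually to rename the conflicting parameters and variables (or to set a per-function #variable_conflict policy). A hypothetical fixed version of myfunc, using an underscore prefix:

```
CREATE OR REPLACE FUNCTION myfunc(_id integer) RETURNS TEXT AS $$
DECLARE
_value text;
BEGIN
-- No ambiguity: parameters and variables no longer share names with columns
SELECT value INTO _value FROM mytable WHERE id = _id;
RETURN _value;
END;
$$ LANGUAGE plpgsql;
```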

I’m just about to upgrade our production database from 8.4 to 9.1.
Out of the 1474 functions (1.8 MB of code), 132 functions had some kind of problem.
Thanks to plpgsql_check_function, I didn’t have to go through all the PL/pgSQL functions manually.

Thank you Pavel! You saved me a lot of work today! 🙂

Deploying PostgreSQL Stored Procedures

When you mention stored procedures, I see horror and disgust in the eyes of many developers.

“Stored procedures are like crack: they are addictive and harmful.”

I agree they are addictive, but harmful?

“They are difficult to version control, test and deploy.”

This is of course not true.

Deployments can be tricky, though. How can you be sure none of the functions in the production
database have been locally modified by some evil consultant?
In a hurry, he did a \ef thinking “I’ll commit this change to the repo later, I won’t forget!”.

Months later, the function needs to be modified again, this time by someone else, who makes
the change to the function’s source code file in the VCS, commits, tests and deploys the function.

Whoops! The local modification made by the evil consultant is gone! Alarms go off, people are
screaming “the system is down!”, people are running around in circles; you revert your change,
back to the previous version in the VCS, but people scream even louder: “the system is down!”.

But all the tests passed! How could this happen?!

To prevent this scenario, I’ve made a simple tool which I always use when deploying functions.

It’s called PgDeploy. I’ve been using it in production successfully since 2008.

It gives you a second chance to review the diff your deployment would actually cause in the
production database, by comparing the new source code with what’s already in the database.

-- SYNOPSIS:
-- SELECT deploy(SQL text, MD5 char(32));

-- Step 1, preview:
-- Watch the diff. Get the MD5 hash.
SELECT deploy($SQL$
    CREATE OR REPLACE FUNCTION ... AS $$
        -- source code of function
    $$ LANGUAGE plpgsql;
$SQL$, NULL);
-- The output returned is a diff showing what functions were removed,
-- created, replaced or updated.

-- Step 2, deploy:
-- Pass the MD5 from step 1 as the second argument to confirm the deployment.
SELECT deploy($SQL$
    CREATE OR REPLACE FUNCTION ... AS $$
        -- source code of function
    $$ LANGUAGE plpgsql;
$SQL$, '<MD5 from step 1>');

Below is a usage example where two developers modify the same function,
and one of them forgets to commit, causing the source code in the database
and in the VCS to differ.

DAY 1. You deploy a new function:

SELECT deploy($SQL$
CREATE OR REPLACE FUNCTION calc_value_added_tax(amount numeric) RETURNS NUMERIC AS $$
DECLARE
_vat numeric := 0.25;
BEGIN
RETURN amount * (1 + _vat);
END;
$$ LANGUAGE plpgsql IMMUTABLE;
$SQL$, NULL);

The deploy function returns a text showing any removed,
new or updated/replaced functions caused by your SQL code.
In this case, we can see a new function was created, as expected:

 +-------------------+                           +
 | Removed functions |                           +
 +-------------------+                           +
 +---------------+                               +
 | New functions |                               +
 +---------------+                               +
 Schema................: public                  +
 Name..................: calc_value_added_tax    +
 Argument data types...: amount numeric          +
 Result data type......: numeric                 +
 Language..............: plpgsql                 +
 Type..................: normal                  +
 Volatility............: IMMUTABLE               +
 Owner.................: postgres                +
 Source code (chars)...: 70                      +
 +-------------------------------+               +
 | Updated or replaced functions |               +
 +-------------------------------+               +
 MD5 of changes: 74e9f61aa5a0ce1c333748ff589a3029
(1 row)

We will now pass the MD5 from the preview as the second argument, confirming we want to deploy:

SELECT deploy($SQL$
CREATE OR REPLACE FUNCTION calc_value_added_tax(amount numeric) RETURNS NUMERIC AS $$
DECLARE
_vat numeric := 0.25;
BEGIN
RETURN amount * (1 + _vat);
END;
$$ LANGUAGE plpgsql IMMUTABLE;
$SQL$, '74e9f61aa5a0ce1c333748ff589a3029');

The function has now been deployed.
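As a quick, hypothetical sanity check of the deployed function:

```
SELECT calc_value_added_tax(100);
-- 100 * (1 + 0.25) = 125
```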

DAY 2. The function is modified:

The function needs to be modified: VAT should not be added for amounts less than 100.
Some evil consultant is assigned the task and makes the change locally in the database, but forgets to commit to the VCS.
This causes a discrepancy between the source code in the production database and in the VCS. Not good at all!

CREATE OR REPLACE FUNCTION calc_value_added_tax(amount numeric) RETURNS NUMERIC AS $$
DECLARE
_vat numeric := 0.25;
BEGIN
IF amount < 100 THEN
    RETURN amount;
ELSE
    RETURN amount * (1 + _vat);
END IF;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

DAY 3. You modify the function.

You are assigned the task of changing the VAT from 25% to 12.5%.
You edit the function in the VCS, change the single line of code,
and use the deploy function to make the deployment.

SELECT deploy($SQL$
CREATE OR REPLACE FUNCTION calc_value_added_tax(amount numeric) RETURNS NUMERIC AS $$
DECLARE
_vat numeric := 0.125;
BEGIN
RETURN amount * (1 + _vat);
END;
$$ LANGUAGE plpgsql IMMUTABLE;
$SQL$, NULL);

In the output from the deploy function you notice the change is not the single line
you were expecting, but a much bigger one!

 +-------------------+                           +
 | Removed functions |                           +
 +-------------------+                           +
 +---------------+                               +
 | New functions |                               +
 +---------------+                               +
 +-------------------------------+               +
 | Updated or replaced functions |               +
 +-------------------------------+               +
 Schema................: public                  +
 Name..................: calc_value_added_tax    +
 Argument data types...: amount numeric          +
 Result data type......: numeric                 +
 Language..............: plpgsql                 +
 Type..................: normal                  +
 Volatility............: IMMUTABLE               +
 Owner.................: postgres                +
 3 c _vat numeric := 0.25;                       +
 3 c _vat numeric := 0.125;                      +
 5 c IF amount < 100 THEN                        +
 5 c RETURN amount * (1 + _vat);                 +
 6 -     RETURN amount;                          +
 7 - ELSE                                        +
 8 -     RETURN amount * (1 + _vat);             +
 9 - END IF;                                     +
 MD5 of changes: 162d4dcc113345c71f6c9bc4448534aa
(1 row)

You realize someone must have made a local change without committing to the VCS!

The deployment is aborted; the deploy function saved the day!