Tuesday, May 5, 2015

Working with Salesforce's destructiveChanges.xml

If you've ever needed to remove a bunch of custom objects, fields, pages, classes, etc. from an org, or from multiple orgs, you've probably come across documentation about destructiveChanges.xml.  If you're familiar with developing on the Salesforce platform using MavensMate or Eclipse, you're probably already familiar with package.xml.  Both files have nearly identical formats.  The difference between them is that package.xml enumerates the items you want to synchronize between your org and your development environment, and destructiveChanges.xml enumerates the items you want to obliterate (or delete) from whatever org you point it at.


The easiest way to see how similar they are is to look at what each of them looks like empty.

package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
</Package>

destructiveChanges.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
</Package>

The only difference between them is destructiveChanges doesn't have a <version> tag.

Let's look again after we add a class to each.  In package.xml we're synchronizing a class, and in destructiveChanges.xml it's a class we want to remove from our org.

package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
    <types>
        <members>TomTest</members>
        <name>ApexClass</name>
    </types>
</Package>

destructiveChanges.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>TomTest</members>
        <name>ApexClass</name>
    </types>
</Package>

As a percentage, the two files are more similar now than they were before. The only difference between them is still the <version> tag.

Executing destructive changes

So how do we execute destructive changes?  The short answer is using Salesforce's migration tool.  In a few minutes we'll execute "ant undeployCode," but we've a few items to take care of first.

For me, the first problem was where to put the files destructiveChanges.xml and package.xml. The former is new and the latter is NOT the same file that usually appears in the src/ directory.

At Xede, we create git repositories for our projects.  Each repository is forked from xede-sf-template.

DrozBook:git tgagne$ ls -lR xede-sf-template
total 16
-rw-r--r--  1 tgagne  staff   684 Aug 21  2014 README.md
-rwxr-xr-x  1 tgagne  staff  1430 Sep 17  2014 build.xml
drwxr-xr-x  4 tgagne  staff   136 Aug 21  2014 del

xede-sf-template//del:
total 16
-rwxr-xr-x  1 tgagne  staff  563 Jan 17  2014 destructiveChanges.xml
-rw-r--r--  1 tgagne  staff  136 Aug 21  2014 package.xml

The repo includes a directory named "del" (not very imaginative) and inside it are the files destructiveChanges.xml and package.xml.  It seems odd to me, but the migration tool requires both the destructiveChanges.xml AND a package.xml to reside there.

The package.xml file is the same empty version as before.  The template's destructiveChanges.xml contains placeholders for the metadata types, but with no members listed it still does basically nothing.

DrozBook:xede-sf-template tgagne$ cat del/package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
</Package>

DrozBook:xede-sf-template tgagne$ cat del/destructiveChanges.xml 
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <name>ApexClass</name>
    </types>
    <types>
        <name>ApexComponent</name>
    </types>
    <types>
        <name>ApexPage</name>
    </types>
    <types>
        <name>ApexTrigger</name>
    </types>
    <types>
        <name>CustomObject</name>
    </types>
    <types>
        <name>Flow</name>
    </types>
    <types>
        <name>StaticResource</name>
    </types>
    <types>
        <name>Workflow</name>
    </types>
</Package>

Now that we have a directory with both files in it, and we have versions of those files that basically do nothing, let's get ready to run the tool.

There's one more file we need to create that's required by the tool, build.xml.  If you're not already using it for deployments you're likely not using it at all.  My version of build.xml is in the parent of del/.  You can see it above in the directory listing of xede-sf-template.

DrozBook:xede-sf-template tgagne$ cat build.xml
<project name="xede-sf-template" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">

    <property environment="env"/>

    <target name="undeployCode">
      <sf:deploy 
 username="${env.SFUSER}" 
 password="${env.SFPASS}" 
 serverurl="${env.SFURL}" 
 maxPoll="${env.SFPOLL}" 
 ignoreWarnings="true"
 checkOnly="${env.CHECKONLY}"
 runAllTests="${env.RUNALLTESTS}"
        deployRoot="del"/>
    </target>

</project>
Since build.xml is in the parent directory of del/, the "deployRoot" attribute is "del," the subdirectory.

The environment property (<property environment.../>) allows operating system environment variables to be substituted inside your build.xml.  In the example above, the environment variables are about what you'd expect them to be (using the bash shell):

export SFUSER=myusername
export SFPASS=mysecretpassword
export SFURL=https://login.salesforce.com   # or https://test.salesforce.com
export SFPOLL=120
export CHECKONLY=false
export RUNALLTESTS=false

Right about now you may be thinking, "Who wants to set all those environment variables?" Truthfully, I don't.  That's why I created a little script to do it for me called "build."  But before we get into that let's just edit our build.xml file so it doesn't need environment variables.

The build.xml below is for a production org.

DrozBook:xede-sf-template tgagne$ cat build.xml
<project name="xede-sf-template" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">

    <target name="undeployCode">
      <sf:deploy 
 username="tgagne+customer@xede.com" 
 password="mysupersecretpassword" 
 serverurl="https://login.salesforce.com" 
 maxPoll="120" 
 ignoreWarnings="true"
 checkOnly="false"
 runAllTests="false"
        deployRoot="del"/>
    </target>

</project>

So now we have our build.xml, our del directory, a del/destructiveChanges.xml that lists no members, and a minimal del/package.xml file.  Let's run ant.

DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k0SKAQ
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k0SKAQ successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

As you can see, it did nothing.  Let's give it something to do, but make it a class that doesn't exist in the target org.

DrozBook:xede-sf-template tgagne$ cat del/destructiveChanges.xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>DoesNotExist</members>
        <name>ApexClass</name>
    </types>
    ... same as before ...
</Package>

I've added a single class, DoesNotExist, to the ApexClass types list and we'll run it again.

DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k0mKAA
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] All warnings:
[sf:deploy] 1.  destructiveChanges.xml -- Warning: No ApexClass named: DoesNotExist found
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k0mKAA successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

Ant (with the migration tool plugin) is telling us it tried to remove the Apex class "DoesNotExist," but no such class exists in the org.  If the class had existed but had already been removed, this is the message you would see.

As a reader exercise, go ahead and create a class "DoesNotExist" in your org.  I went into Setup->Classes->New and entered "public class DoesNotExist{}". It's about as useless a class as you can create, though I've seen and perhaps written worse.

If you run ant again you'll see it no longer reports the warning.
DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k11KAA
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k11KAA successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

And there you have it!  For a little extra I'll share my "build" script, which makes it pretty easy to extract, undeploy (what we just did), and deploy code, with or without tests, or as a validation-only run.
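
As a rough sketch (illustrative only; the variable values and argument handling below are placeholders, not the actual script), a wrapper like this sets the environment variables and runs whichever ant target you name:

#!/bin/bash
# build -- illustrative wrapper; credentials and URL below are placeholders
# usage: build <ant-target> [checkonly] [runtests]

export SFUSER=myusername
export SFPASS=mysecretpassword
export SFURL=https://test.salesforce.com
export SFPOLL=120
export CHECKONLY=false
export RUNALLTESTS=false

for arg in "$@"; do
    case $arg in
        checkonly) export CHECKONLY=true ;;
        runtests)  export RUNALLTESTS=true ;;
    esac
done

ant "$1"

Running "build undeployCode checkonly" would then validate the destructive changes without actually deleting anything.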

Tuesday, November 18, 2014

JWT Bearer token flow can be used for community users -- example in cURL

Abstract

Problem
How can community users authenticate to Salesforce via the API without having to interactively grant permission?
Answer
Use the JWT Bearer Token Flow
Disclaimer
I was going to wait a while longer before posting this to make sure it was beautifully formatted and brilliantly written--but that wouldn't have helped anyone trying to solve this problem in the meantime (like I was a few weeks back).

So in the spirit of both this blog's name and agile development, I'm publishing it early, perhaps not often, but hopefully in-time for another developer.
Attributions
Thanks to Jim Rae (@JimRae2009) for suggesting this approach, inspired by his work integrating canvas with node.js on the desktop, and his related Dreamforce 2014 presentation.

Background

A client of ours has an existing, non-Salesforce website with LOTS (tens of thousands) of users.  The client also has a Salesforce ServiceCloud instance they use for all their customer support, and they wanted their customers to interact with the CRM through their website, without iframes or exposing the SF portal to their users. 
The solution is to use the JWT Bearer Token Flow.  Salesforce does not support username/password authorization for community-licensed users, and the other OAuth flows require a browser to intermediate between two domains. 
Though Salesforce's documentation does a good job describing the flow, it's a little weak on specifics.  Luckily, there's a reference Apex implementation on github (salesforceidentity/jwt), and below I'll provide a reference implementation using cURL.

Configuring your connected app

But before starting, there are a few things to know about your connected app.

  1. Your connected app must be created inside the target Salesforce instance.  You cannot re-use the same consumer key and secret across orgs unless your app is part of a package.
  2. Your connected app must also use digital signatures.  This will require creating a certificate and private key.  The openssl command for doing this appears later in this article.
  3. You must set the "Admin approved users are pre-authorized" permitted-users option to avoid login errors.

Configuring your community

  1. The community profile must allow access to the Apex classes that implement your REST interface.
  2. Each community user will require the "API Enabled" permission.  This cannot be specified at the profile level.

Creating the certificate

A single openssl command can create your private key and a self-signed certificate.
openssl req \
    -subj "/C=US/ST=MI/L=Troy/O=Xede Consulting Group, Inc./CN=xede.com" \
    -newkey rsa:2048 -nodes -keyout private.key \
    -x509 -days 3650 -out public.crt
Substitute your own values for the -subj parameter.  It's a self-signed certificate, so no one will believe you anyway.  The benefit of using the -subj parameter is to avoid answering the certificate questions interactively.
The file "public.crt" is the certificate to load into your connected app on Salesforce.

Creating a community user

If you already have a community user you can skip to the next section.  If you don't, you will need to create one to test with. 
Make sure the user that creates the community user (either from a Contact or an Account if person-accounts are enabled) has a role.  Salesforce will complain when the Contact is enabled for login if the creating user doesn't have a role.

cURL Example

DrozBook:scripts tgagne$ cat jwlogin
#!/bin/bash

if [ $# -lt 2 ]; then
 echo 1>&2 "usage: $0 username sandbox"
 exit 2
fi

export LOGINURL=https://yourportalsite.force.com/optional-part
export CLIENTID='3MVG9Gmy2zm....value from connected-app...OQzJzb4BF469Fkip'

#from https://help.salesforce.com/HTViewHelpDoc?id=remoteaccess_oauth_jwt_flow.htm#create_token

#step 1
jwtheader='{ "alg" : "RS256" }'

#step 2
jwtheader64=`echo -n $jwtheader | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

timenow=`date +%s`
expires=`expr $timenow + 300`

#step3
claims=`printf '{ "iat":%s, "iss":"%s", "aud":"%s", "prn":"%s", "exp":%s }' $timenow $CLIENTID $LOGINURL $1 $expires`

#step4
claims64=`echo -n $claims | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

#step5
token=`printf '%s.%s' $jwtheader64 $claims64`

#step6
signature=`echo -n $token | openssl dgst -sha256 -binary -sign private.key | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

#step7
bigstring=`printf '%s.%s' $token $signature`

curl --silent \
 --data-urlencode grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer \
 --data-urlencode assertion=$bigstring \
 $LOGINURL/services/oauth2/token \
 -o login.answer

Comments on jwlogin

  • LOGINURL is from the community's Administrative Settings tab.
  • CLIENTID is the connected app's "Consumer Key."
  • The big trick in the script above is step 6, the signing.  The token must be hashed and signed with a single openssl command.  A quick way to sanity-check that step locally appears below.
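
To sanity-check the signing locally before involving Salesforce, you can verify the raw signature against the certificate's public key.  This is just a sketch: it assumes the same private.key and public.crt from earlier and writes two scratch files, token.txt and sig.bin.

# extract the public key from the self-signed certificate
openssl x509 -in public.crt -pubkey -noout > public.pem

# sign the token the same way the script does, but keep the raw binary signature
echo -n $token > token.txt
openssl dgst -sha256 -binary -sign private.key token.txt > sig.bin

# prints "Verified OK" when the key pair and hashing line up
openssl dgst -sha256 -verify public.pem -signature sig.bin token.txt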

Using the token in subsequent cURL commands

A successful response to the authentication request will resemble:
{"scope":"id full custom_permissions api visualforce web openid chatter_api","instance_url":"https://allyservicingcrm--gagne2.cs7.my.salesforce.com","sfdc_community_url":"https://gagne2-gagne2.cs7.force.com/customers","token_type":"Bearer","sfdc_community_id":"0DBM00000008OTXOA2","access_token":"00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi"}
It makes more sense if I pretty-print it.
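If you have Python handy, one quick way to do that is:

python -m json.tool < login.answer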
{
    "scope": "id full custom_permissions api visualforce web openid chatter_api",
    "instance_url": "https:\/\/mydomain--sandboxname.cs7.my.salesforce.com",
    "sfdc_community_url": "https:\/\/communityname-sandboxname.cs7.force.com\/customers",
    "token_type": "Bearer",
    "sfdc_community_id": "0DBM00000008OTXOA2",
    "access_token":  "00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi"
}
The two important pieces of information above are the instance_url and the access_token.

The best way to describe how to use this information is to show you a subsequent curl command before substitution, and after.

Before (source)

DrozBook:scripts tgagne$ cat rerun
#!/bin/bash

INSTANCE=`sed -e 's/^.*"instance_url":"\([^"]*\)".*$/\1/' login.answer`
TOKEN=`sed -e 's/^.*"access_token":"\([^"]*\)".*$/\1/' login.answer`
set -x
curl \
    --silent \
    -H "Authorization: Bearer $TOKEN" \
    "$INSTANCE/services/apexrest/$1"

After

DrozBook:scripts tgagne$ rerun SMInquiry/xyzzy
+ curl --silent -H 'Authorization: Bearer 00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi' https://mydomain--sandboxname.cs7.my.salesforce.com/services/apexrest/SMInquiry/xyzzy


Sunday, December 8, 2013

Simple dependency management for dependent Salesforce objects

Introduction

Salesforce programmers know it is sometimes difficult to save multiple objects with dependencies on each other in the right order and with as little effort as possible.  The "Collecting Parameter" pattern is an easy way to do this, and this article will show you how to use it in your own code.

Unit of Work

In June 2013, FinancialForce's CTO, Andrew Fawcett, wrote his Unit Of Work article, explaining how a dependency mechanism might be implemented to simplify the saving of multiple objects with dependencies between them.

The problem is a common one for Salesforce programmers--the need to create master and detail objects simultaneously.  Programmers must save the master objects first before their IDs can be set in the detail objects.

An example might be an invoice and its line-items.  To save any InvoiceLine__c, its master object, an Invoice__c, must be saved first.

To solve this problem, Xede uses a pattern popularized by Kent Beck in his 1995 book, Smalltalk Best Practice Patterns, called Collecting Parameter.  For those unfamiliar with the Smalltalk programming language, it can be briefly described as the first object-oriented language where everything is an object.  Numbers, messages, classes, stack frames--everything.  In 1980 (nearly 34 years ago) it also supported continuable exceptions and lambda expressions.  Lest I gush too much about it, I'll say only that nearly all object-oriented languages owe their best features to Smalltalk and their worst features to either trying to improve on it or ignoring prior art.

Returning to the subject at hand, dependency saves, Xede would have created two classes to wrap the objects: Invoice and InvoiceLine.  Each instance of Invoice will aggregate within it the InvoiceLine instances belonging to it.

The code might look something like this.

// create an invoice and add some lines to it
Invoice anInvoice = new Invoice(anInvoiceNumber, anInvoiceDate, aCustomer);
...
// adding details is relatively simple
anInvoice.add(anInvoiceLine);
anInvoice.add(anotherInvoiceLine);
anInvoice.save();

So now our Invoice has two detail instances inside it.  Keeping true to the OO principles of data-hiding and loose coupling, we can safely ignore how these instances store their sobject variables: Invoice's Invoice__c and InvoiceLine's InvoiceLine__c.  But without knowing how they store their sobjects, how can we save the master and the detail records with the minimum of two DMLs, one to save the master and another to save the details?

We do it using a collecting parameter.

Collecting Parameter

A collecting parameter is basically a collection of like-things that cooperating classes add to.  Imagine a basket that might get passed to attendees at a charity event.  Each person receiving the collection basket may or may not add cash or checks to it.  In both programming and charity fundraisers it is better manners to let each person add to the basket themselves than to have an usher reach into strangers' pockets and remove cash.  The latter should be regarded as criminal--if not at charity events then in programming.

For programmers, such a thing violates data-hiding; not all classes keep their wallets in the same pocket (variable), some may use money clips rather than wallets, some use purses (collection types), some may have cash while others have checks or coins.  Writing code that will rummage through each class' data looking for cash is nearly impossible--even with reflection.  In the end it all gets deposited into a bank account.

Let's first look at the saveTo() methods of Invoice and InvoiceLine.  They are the simplest.

public with sharing class Invoice extends XedeObject {
    public Id getId() { return sobjectData.id; }

    public override void saveTo(list<sobject> aList, list<XedeObject> dependentList)
    {
        aList.add(sobjectData);

        for (InvoiceLine each : lines)
            each.saveTo(aList, dependentList);
    }

    Invoice__c sobjectData;
    list<InvoiceLine> lines;
}

Invoice knows where it keeps its own reference to Invoice__c (cohesion), so when it comes time to save it simply adds its sobject to the list of sobjects to be saved.  After that, it also knows where it keeps its own list of invoice lines and so calls saveTo() on each of them.

public with sharing class InvoiceLine extends XedeObject {
    public override void saveTo(list<sobject> aList, list<XedeObject> dependentList) {
        if (sobjectData.parent__c != null)  // if I already have my parent's id I can be saved
            aList.add(sobjectData);

        else if (parent.getId() != null) {  // else if my parent has an id, copy it and I can be saved
            sobjectData.parent__c = parent.getId();
            aList.add(sobjectData);
        }

        else
            dependentList.add(this); // I can't be saved until my parent is saved
    }

    Invoice parent;
    InvoiceLine__c sobjectData;
}

InvoiceLine's implementation is nearly as simple as Invoice's, but subtly different.

Basically, if the InvoiceLine already has its parent's id, or can get its parent's id, then it adds its sobject data to the list to be saved.  If it doesn't have its parent's id then it must wait its turn, and adds itself to the dependent list.

Readers may wonder why Invoice doesn't decide for itself whether to save its children.  Invoice could skip sending saveTo() to its children if it doesn't have an id, but whether or not its children should be saved is not its decision--it's theirs.  They may have other criteria that must be met before they can be saved.  They may have two master relationships and be waiting for both.  They may have rows to delete before they can be saved, or may have detail records of their own with other criteria independent of whether Invoice has an id or not.  Whatever the reason may be, the rule is that each object should decide for itself whether it's ready to save, just as it's each person's decision whether and how much money to put into the collection basket.

In our example below, save() passes two collection baskets: one collects sobjects and another collects instances of classes whose sobjects aren't ready for saving--yet.  save() loops over both lists until they're empty, and in this way is able to handle arbitrary levels of dependencies with the minimum number of DML statements.

Let's look at the base class' (XedeObject) implementation of save().

public virtual class XedeObject {
    public virtual void save() {
        list<XedeObject> objectList = new list<XedeObject>();
        list<XedeObject> dependentList = new list<XedeObject> { this };

        do {
            list<sobject> aList = new list<sobject>();
            list<sobject> updateList = new list<sobject>();
            list<sobject> insertList = new list<sobject>();

            objectList = new list<XedeObject>(dependentList);
            dependentList.clear();

            for (XedeObject each: objectList)
                each.saveTo(aList, dependentList);

            for (sobject each : aList) {
                if (each.id == null)
                    insertList.add(each);
                else
                    updateList.add(each);
            }

            try {
                update updateList;
                insert insertList;
            } catch (DMLException dmlex)  {
                XedeException.Raise('Error adding or updating object : {0}', dmlex.getMessage());
            }
        } while (dependentList.isEmpty() == false);
    }

    public virtual void saveTo(list<sobject> anSobjectList, list<XedeObject> aDependentList)
    {
        subclassMethodError();
    }
}

To understand how this code works you need to be familiar with subclassing.  Essentially, the classes Invoice and InvoiceLine are both subclasses of XedeObject.  This means they inherit all the functionality of XedeObject.  Though neither Invoice nor InvoiceLine implements save(), they will both understand the message because they've inherited its implementation from XedeObject.

The best way to understand what save() does is to walk through "anInvoice.save()."

anInvoice.save() executes XedeObject's save() method because Invoice doesn't have one of its own (remember, it's a subclass of XedeObject).  save() begins by adding its own instance to dependentList.  Then it loops over the dependent list, sending saveTo() to each instance and collecting new dependent objects in the dependent list.

After collecting all the objects it either updates or inserts them, then returns to the top of the loop if the dependent list isn't empty and restarts the process.

When the dependent list is empty there's nothing else to do, and the method falls off the bottom, returning to the caller.

XedeObject also implements saveTo(), but its implementation throws an exception.  XedeObject's subclasses ought to implement saveTo() themselves if they intend to participate in the dependency saves.  If they don't or won't, there's no need to override saveTo().

One of our recent projects was a loan servicing system.  Each loan could have multiple obligors, and each obligor could have multiple addresses.  The system could be given multiple loans at a time to create, and with each batch of loans a log record was recorded.  We had an apiResponse object with a list of loans.  When we called anApiResponse.save(), its saveTo() sent saveTo() to each of its loans, each loan sent saveTo() to each of its obligors, and each obligor sent saveTo() to each of its addresses, before apiResponse sent saveTo() to its log class.

In the end, ApiResponse saved the loans, obligors, addresses, and log records with three DML statements--all without anything much more complicated than each class implementing saveTo().
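
A sketch of what the top of that chain might look like appears below.  The class and field names here are illustrative, not the actual project's code; the point is only that each level forwards the same two collecting parameters.

public with sharing class ApiResponse extends XedeObject {
    public override void saveTo(list<sobject> aList, list<XedeObject> dependentList) {
        for (Loan each : loans)                  // each loan cascades to its obligors,
            each.saveTo(aList, dependentList);   // and each obligor to its addresses

        log.saveTo(aList, dependentList);        // finally, the batch's log record
    }

    list<Loan> loans;
    Log log;
}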

Some programmers may argue that interfaces might have accomplished the same feat without subclassing, but in this case that is not true.  Interfaces don't provide method implementations.  Had we used interfaces, every object would have been required to implement save().

Still to do

As useful as save()/saveTo() has proved to be, I can think of a few improvements I'd like to make to it.

First, I'd like to add a delete list.  Some of our operations include deletes, and rather than having each object do its own deletes I'd prefer to collect them into a single list and delete them all at once.

Next, the exception handling around the update and insert should be improved.  DmlException has lots of valuable information we could log or include in our own exception.

Third, I would love to map the DML exceptions with the objects that added them to the list.  save() could then collect all the DML exceptions and send them to the objects responsible for adding them to the list.

Coming up

  • XedeObject implements other useful methods we tend to use in many of our projects.  Implementing them once in XedeObject and deploying it to each of our customers' orgs saves time and money, and improves consistency across all our projects.  One of these is coalesce().  There are many others.
  • Curl scripts for exercising Salesforce REST services.
  • Using staticresources as a source for unit-test data.

Friday, March 22, 2013

A better way to generate XML on Salesforce using VisualForce

There are easier ways to generate XML on Salesforce than either the Dom library or XmlStreamWriter class.  If you've done either, perhaps you'll recognize the code below.

public static void DomExample()
{
    Dom.Document doc = new Dom.Document();
    
    Dom.Xmlnode rootNode = doc.createRootElement('response', null, null);

    list<Account> accountList = [ 
        select  id, name, 
                (select id, name, email from Contacts) 
          from  Account 
    ];
          
    for (Account eachAccount : accountList) {
        Dom.Xmlnode accountNode = rootNode.addChildElement('Account', null, null);
        accountNode.setAttribute('id', eachAccount.Id);
        accountNode.setAttribute('name', eachAccount.Name);
        
        for (Contact eachContact : eachAccount.Contacts) {
            Dom.Xmlnode contactNode = accountNode.addChildElement('Contact', null, null);
            contactNode.setAttribute('id', eachContact.Id);
            contactNode.setAttribute('name', eachContact.Name);
            contactNode.setAttribute('email', eachContact.Email);
        }
    }
    
    system.debug(doc.toXmlString());            
}

Or maybe this example.

public static void StreamExample()
{
    XmlStreamWriter writer = new XmlStreamWriter();
    
    writer.writeStartDocument('utf-8', '1.0');        
    writer.writeStartElement(null, 'response', null);
    
    list<Account> accountList = [ 
        select  id, name, 

                (select id, name, email from Contacts) 
          from  Account 
    ];
          
    for (Account eachAccount : accountList) {
        writer.writeStartElement(null, 'Account', null);
        writer.writeAttribute(null, null, 'id', eachAccount.Id);
        writer.writeAttribute(null, null, 'name', eachAccount.Name);        

        for (Contact eachContact : eachAccount.Contacts) {
            writer.writeStartElement(null, 'Contact', null);
            
            writer.writeAttribute(null, null, 'id', eachContact.Id);
            writer.writeAttribute(null, null, 'name', eachContact.Name);
            writer.writeAttribute(null, null, 'email', eachContact.Email);
            
            writer.writeEndElement();
        }
        
        writer.writeEndElement();
    }
    
    writer.writeEndElement();
    
    system.debug(writer.getXmlString());
    
    writer.close();            
}

But wouldn't you rather write something like this?

public static void PageExample()
{
    PageReference aPage = Page.AccountContactsXML;
    aPage.setRedirect(true);
    system.debug(aPage.getContent().toString());
}

Let's take a look at what makes creating the XML possible with so few lines of Apex.

Rather than build our XML using Apex code, we can type it directly into a Visualforce page--providing we strip all VF's page accessories off using apex:page attributes.


<apex:page StandardController="Account" recordSetVar="Accounts" contentType="text/xml" showHeader="false" sidebar="false" cache="false">
<?xml version="1.0" encoding="UTF-8" ?>
<response>
<apex:repeat value="{!Accounts}" var="eachAccount" >
    <Account id="{!eachAccount.id}" name="{!eachAccount.name}">
    <apex:repeat value="{!eachAccount.contacts}" var="eachContact">
        <Contact id="{!eachContact.id}" name="{!eachContact.name}" email="{!eachContact.email}"/>
    </apex:repeat>
    </Account>
</apex:repeat>
</response>
</apex:page>

The secret that makes this code work is setting the page's API version to 19.0 inside its metadata.  That is the only thing that allows the <?xml ?> processing instruction to appear at the top without the Visualforce compiler throwing Conniptions (a subclass of Exception). 
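
For reference, the page's metadata (AccountContactsXML.page-meta.xml if you're working from the file system) would look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<ApexPage xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>19.0</apiVersion>
    <label>AccountContactsXML</label>
</ApexPage>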

Depending on how much XML you need to generate, another advantage to the VisualForce version is how few script statements are required to produce it.

Number of code statements: 4 out of 200000

Our Dom and Stream examples require 28 and 37 respectively--and that's in a developer org with only three accounts and three contacts.  Additionally, the Page example is only 18 lines including both the .page and .cls, whereas the Dom and Stream examples are 27 and 38 lines respectively (coincidence?).

But what happens when we add billing and shipping addresses (and two more contacts)?

Our page example's Apex code doesn't change, but its page does.

<apex:page StandardController="Account" recordSetVar="Accounts" contentType="text/xml" showHeader="false" sidebar="false" cache="false">
<?xml version="1.0" encoding="UTF-8" ?>
<response>
<apex:repeat value="{!Accounts}" var="eachAccount" >    
    <Account id="{!eachAccount.id}" name="{!eachAccount.name}">
        <apex:outputPanel rendered="{!!IsBlank(eachAccount.billingStreet)}" layout="none">
            <Address type="Billing">
                <Street>{!eachAccount.billingStreet}</Street>
                <City>{!eachAccount.billingCity}</City>
                <State>{!eachAccount.billingState}</State>
                <PostalCode>{!eachAccount.billingPostalCode}</PostalCode>
                <Country>{!eachAccount.billingCountry}</Country>
            </Address>        
        </apex:outputPanel>        
        <apex:outputPanel rendered="{!!IsBlank(eachAccount.shippingStreet)}" layout="none">            
            <Address type="Shipping">
                <Street>{!eachAccount.shippingStreet}</Street>
                <City>{!eachAccount.shippingCity}</City>
                <State>{!eachAccount.shippingState}</State>
                <PostalCode>{!eachAccount.shippingPostalCode}</PostalCode>
                <Country>{!eachAccount.shippingCountry}</Country>
            </Address>
        </apex:outputPanel>
        <apex:repeat value="{!eachAccount.contacts}" var="eachContact">&
            <Contact id="{!eachContact.id}" name="{!eachContact.name}" email="{!eachContact.email}"/>
        </apex:repeat>
    </Account>
</apex:repeat>
</response>
</apex:page>

We've added sections for both the billing and shipping addresses, with conditional rendering in case either doesn't exist.  In addition to our six lines of Apex (PageExample() above) we've added 12 new lines to the earlier 18 for a total of 36 lines.  The best part is, even with the extra XML being generated, our Page example will still only consume 4 script statements of the already-insufficient 200,000.

How do our Dom and Stream examples fare?  Both are pasted together below into a single code section.

public static void DomExample()
{
    Dom.Document doc = new Dom.Document();        
    
    Dom.Xmlnode rootNode = doc.createRootElement('response', null, null);

    list<Account> accountList = [ 
        select    id, name, 
                billingStreet, billingCity,
                billingState, billingPostalCode,
                billingCountry,
                shippingStreet, shippingCity,
                shippingState, shippingPostalCode,
                shippingCountry,
                (select id, name, email from Contacts) 
          from    Account ];
          
    for (Account eachAccount : accountList) {
        Dom.Xmlnode accountNode = rootNode.addChildElement('Account', null, null);
        accountNode.setAttribute('id', eachAccount.Id);
        accountNode.setAttribute('name', eachAccount.Name);
        
        if (String.IsNotBlank(eachAccount.billingStreet)) {
            Dom.Xmlnode addressNode = accountNode.addChildElement('Address', null, null);
            addressNode.setAttribute('type', 'Billing');
            addressNode.addChildElement('Street', null, null).addTextNode(eachAccount.billingStreet);
            addressNode.addChildElement('City', null, null).addTextNode(eachAccount.billingCity);
            addressNode.addChildElement('State', null, null).addTextNode(eachAccount.billingState);
            addressNode.addChildElement('PostalCode', null, null).addTextNode(eachAccount.billingPostalCode);
            addressNode.addChildElement('Country', null, null).addTextNode(eachAccount.billingCountry);                
        }
        
        if (String.IsNotBlank(eachAccount.ShippingStreet)) {                
            Dom.Xmlnode addressNode = accountNode.addChildElement('Address', null, null);
            addressNode.setAttribute('type', 'Shipping');
            addressNode.addChildElement('Street', null, null).addTextNode(eachAccount.shippingStreet);
            addressNode.addChildElement('City', null, null).addTextNode(eachAccount.shippingCity);
            addressNode.addChildElement('State', null, null).addTextNode(eachAccount.shippingState);
            addressNode.addChildElement('PostalCode', null, null).addTextNode(eachAccount.shippingPostalCode);
            addressNode.addChildElement('Country', null, null).addTextNode(eachAccount.shippingCountry);                
        }
        
        for (Contact eachContact : eachAccount.Contacts) {
            Dom.Xmlnode contactNode = accountNode.addChildElement('Contact', null, null);
            contactNode.setAttribute('id', eachContact.Id);
            contactNode.setAttribute('name', eachContact.Name);
            contactNode.setAttribute('email', eachContact.Email);
        }
    }
    
    system.debug(doc.toXmlString());            
}

public static void StreamExample()
{
    XmlStreamWriter writer = new XmlStreamWriter();
    
    writer.writeStartDocument('utf-8', '1.0');        
    writer.writeStartElement(null, 'response', null);
    
    list<Account> accountList = [ 
        select    id, name, 
                billingStreet, billingCity,
                billingState, billingPostalCode,
                billingCountry,
                shippingStreet, shippingCity,
                shippingState, shippingPostalCode,
                shippingCountry,
                (select id, name, email from Contacts) 
          from    Account ];
          
    for (Account eachAccount : accountList) {
        writer.writeStartElement(null, 'Account', null);
        writer.writeAttribute(null, null, 'id', eachAccount.Id);
        writer.writeAttribute(null, null, 'name', eachAccount.Name);
        
        if (String.IsNotBlank(eachAccount.billingStreet)) {
            writer.writeStartElement(null, 'Address', null);
            writer.writeAttribute(null, null, 'type', 'Billing');                
            
            writer.writeStartElement(null, 'Street', null);
            writer.writeCharacters(eachAccount.billingStreet);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'City', null);
            writer.writeCharacters(eachAccount.billingCity);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'State', null);
            writer.writeCharacters(eachAccount.billingState);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'PostalCode', null);
            writer.writeCharacters(eachAccount.billingPostalCode);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'Country', null);
            writer.writeCharacters(eachAccount.billingCountry);
            writer.writeEndElement();

            writer.writeEndElement();                
        }
        
        if (String.IsNotBlank(eachAccount.shippingStreet)) {
            writer.writeStartElement(null, 'Address', null);
            writer.writeAttribute(null, null, 'type', 'Shipping');                
            
            writer.writeStartElement(null, 'Street', null);
            writer.writeCharacters(eachAccount.shippingStreet);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'City', null);
            writer.writeCharacters(eachAccount.shippingCity);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'State', null);
            writer.writeCharacters(eachAccount.shippingState);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'PostalCode', null);
            writer.writeCharacters(eachAccount.shippingPostalCode);
            writer.writeEndElement();
            
            writer.writeStartElement(null, 'Country', null);
            writer.writeCharacters(eachAccount.shippingCountry);
            writer.writeEndElement();

            writer.writeEndElement();                
        }

        for (Contact eachContact : eachAccount.Contacts) {
            writer.writeStartElement(null, 'Contact', null);
            
            writer.writeAttribute(null, null, 'id', eachContact.Id);
            writer.writeAttribute(null, null, 'name', eachContact.Name);
            writer.writeAttribute(null, null, 'email', eachContact.Email);
            
            writer.writeEndElement();
        }
        
        writer.writeEndElement();
    }
    
    writer.writeEndElement();
    
    system.debug(writer.getXmlString());
    
    writer.close();            
}

Our Dom example is 52 lines and takes 60 script statements, and our Stream example has ballooned to 96 lines and takes 104 script statements on our tiny data set.  For anyone keeping track, PageExample() has 30% fewer lines than DomExample() and 63% fewer lines than StreamExample().  Most importantly, no matter how much data is involved, PageExample will only ever use 4 script statements, while the other two grow right along with the data, since each new row requires more than one script statement to generate.

Caveats and disclaimers

  • The page above is about as basic as I could come up with.  It stands alone and requires no controllers.  Readers should be able to paste it directly into their development orgs and see what they get (don't forget to set the API version to 19.0).
  • So basic a page doesn't take into account ordering of the data.  If the XML data needs to be in a specific order, a controller would be required to return that list back to the page using a SOQL "order by" clause.
  • Though this technique is great for generating XML, it can't consume XML.  That's probably obvious to programmers but is important to point out for management types that may visit.
  • XSL stylesheets can easily be referenced from the XML page by simply adding <?xml-stylesheet type="text/xsl" href="..."?> after the <?xml ?> instruction.  Such a thing can be done with PageExample() and StreamExample(), but the Dom classes don't allow adding processing instructions that I know of.
  • It's impossible to use getContent() inside test methods. 

Note: This article was originally published March 22, 2013 at it.toolbox.com on Anything Worth Doing.

Tuesday, April 13, 2010

Inside-out design: Parts I and II



The topic of bottom-up vs. top-down design has accumulated a lot of baggage since both descriptions of system design were first introduced in the 1970s. Both are perhaps well understood, or perhaps many only assume they understand them. This series of articles introduces the terms inside-out and outside-in to help readers visualize a three-dimensional design (an onion’s layers would be a good example) rather than a two-dimensional design similar to tree rings.

Business software is discovered, not invented. Arguments that computer technology has fundamentally changed business, or even invented it, are exaggerated. The business of banking remains much the same as it was 150 years ago: deposits and loans. Insurance remains much the same: pay smaller amounts now for the promise to cover expenses later. Retailing, logistics, and drafting are also mostly unchanged.

If computers haven’t invented these businesses, what can we truthfully assert they have done? We can assert that they’ve helped make humans, both individually and collectively, super human. The way in which software has incrementally accomplished this feat can be described as from the inside-out. This article will elaborate on what inside-out design is, use it as a model for how new software projects should be designed and developed, and describe how inside-out design (IOD) avoids the many shortcomings of alternative approaches.

“We can make him better than he was before. Better, stronger, faster.” 
Introduction to The Six Million Dollar Man

Though banking may be a complicated business, its basic activities are simple. Customers “deposit” money at the bank and are paid interest. Banks pay a lower interest rate on deposits than they earn lending money to other customers.

Perfect Memory

Our first step at creating a super-human banker is to improve their memory—regardless of age. How many customers, account balances, and interest rates for each can a human remember perfectly on their own? Whatever that number is, a banker that can remember 100 times more will be more profitable. A banker that can remember 100 times more than that will be more profitable still. A banker with perfect memory is limited only by his productivity and efficiency—but we’ll address that later.

Perfect memory is what databases provide the banker. A database is capable of remembering, perfectly, the name of every customer, their address, phone number, accounts, account balances, transaction history, and even their relationships to other customers and their accounts.

This is the core of our inside-out design. The business already existed—all we did was discover and record the banking schema into a database.

If nothing else is done, our banker may be better off than they were before. Without any additional features the possibilities are nearly endless. Anything that can be stored in the database can be done so perfectly. Any number or type of account and any number or type of transaction can be perfectly stored and perfectly retrieved.

Much more can be written of the benefits of relational databases, and indeed much already has, not the least of which is RDBs’ basis in relational set theory, referential integrity, and normalization.

But even with mathematically provably correct data, perfect memory can still be tarnished with imperfect manipulation. The next layer will enhance the first with perfect execution.

Perfect Execution

With perfect memory our banker will never forget your name or account balance. They simply record each of your transactions in their database.
If this were a relational database our banker could use SQL. Using SQL they can find your account number using your name or phone number:

SELECT @ACCOUNT_NUMBER = ACCOUNT_NUMBER
 FROM CUSTOMER
 JOIN ACCOUNT
   ON ACCOUNT.OWNER_KEY = CUSTOMER.CUSTOMER_KEY
WHERE CUSTOMER.PHONE_NUMBER = '248 555 2960'

Once they have your account number they can enter the transaction:

INSERT INTO TRAN_HISTORY (ACCOUNT_NUMBER, TRANSACTION, AMOUNT)
VALUES (@ACCOUNT_NUMBER, 'DEPOSIT', 100.00)


Depending on how “perfect” their database is, and how many accounts the customer has, or whether they recently bounced a check and must pay an NSF fee, or how accounts feed the general ledger, more SQL will likely be required to keep everything “perfect.”

So even though the banker can remember perfectly what was done they have difficulty remembering how to do it.

Most contemporary relational databases provide a mechanism for building SQL macros or functions called stored procedures. Stored procedures extend the syntax of SQL and provide a mechanism for storing the function inside the database itself. In this manner an RDB may hide the details of its schema as much for its own benefit as our banker’s. Additionally, invoking stored procedures is simpler than typing all the SQL each time, making it easier for more bankers to use the database even if they must still learn some syntax.

If SQL is the lowest-level language for manipulating relational database tables, a 1st generation language, stored procedures can be thought of as a less low-level, or 2nd generation, language. Using stored procedures, our example above may be simplified.

EXEC ACCOUNT_DEPOSIT('248 555 2960', 100.00)
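
Behind that one line, the procedure body might look something like the sketch below--same tables as the earlier examples, with the exact syntax varying by database vendor:

CREATE PROCEDURE ACCOUNT_DEPOSIT
    @PHONE_NUMBER VARCHAR(20),
    @AMOUNT       MONEY
AS
BEGIN
    DECLARE @ACCOUNT_NUMBER INT

    SELECT @ACCOUNT_NUMBER = ACCOUNT_NUMBER
      FROM CUSTOMER
      JOIN ACCOUNT
        ON ACCOUNT.OWNER_KEY = CUSTOMER.CUSTOMER_KEY
     WHERE CUSTOMER.PHONE_NUMBER = @PHONE_NUMBER

    INSERT INTO TRAN_HISTORY (ACCOUNT_NUMBER, TRANSACTION, AMOUNT)
    VALUES (@ACCOUNT_NUMBER, 'DEPOSIT', @AMOUNT)
END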

How ACCOUNT_DEPOSIT is implemented is hidden both by virtue and necessity. By virtue because bankers don’t have to remember all the details of an account deposit, and by necessity because such an interface is required to provide perfect execution—the database is always updated consistently no matter who invokes the procedure. Additionally, the procedure is free to change its implementation without affecting bankers as long as the order, type, and number of the procedure’s arguments are unchanged.

The reasons for the procedure’s change are also hidden from the procedure’s users. Its implementation may have changed because of new features or schema change. Regardless the reason, the procedure’s consumers benefit by its improved implementation without needing to change what they already know and the processes they’ve already documented.

It’s worth noting that an RDB that provides stored procedures is very much like an object in a traditional object-oriented point-of-view. Just as objects implement publicly-accessible methods to hide their implementation our banking RDB schema implements publicly-accessible procedures to hide its implementation.

Our banking database’s stored procedures define its Application Programming Interface. Any user can use the stored procedures to effect perfect transactions.

It’s important to pause here and contemplate an important inside-out feature. Any user can use the stored procedures to effect perfect transactions. One banker may be a teller, another may be an ATM, or a Point-of-Service terminal, or still another may be a web page.

Even though our implementation requires that applications (tellers, ATMs, POSs, etc.) have access to our database, no other technical hurdle is erected. Any programming language that provides a library to access our RDB is capable of executing perfect transactions. In this sense, the surface area of our system has been increased. We’ve simultaneously improved our system’s integrity while increasing its utility to other languages and applications.

Outside-in designs may approach this differently. It is too commonplace for applications to be designed from the outside-in—designing the user interface first and the supporting infrastructure afterwards. The result, though possibly to the user’s liking, is only as capable as it will ever be. It has only a single interface and its supporting mechanisms implement only that interface's required features. It has little surface area.

So now our banker has perfect memory and perfect execution. In the next article we’ll explore inside-out’s next super-human enhancement—ubiquity.

Tuesday, December 2, 2008

If it's not in Bugzilla, it doesn't exist



There are many ways to manage projects. Just because I understand time estimates are important doesn't mean I have to like or believe them.

An alternative to timelines and resource estimates is to manage development, enhancements, and fixes with little more than a defect tracking system. At InStream we used Bugzilla.

Using Bugzilla or any defect tracking tool as a substitute for project management software may not work for everybody, but it worked well for us. Below I'll describe why and how we used it.

As the development team at InStream grew larger and end-user requests became more frequent, we did what most companies do--create a technology steering committee to track and prioritize enhancements and fixes so they more closely matched the priorities of our business. We had a board with 3x5 cards we filled out for each request, and we put the cards into buckets on the board describing what might be done one week out, two weeks out, and a when-we-get-to-it-we'll-get-to-it category.

The committee consisted of the COO, the CTO (myself), the development staff, the QA manager, the CCO (chief credit officer), and some of our end-users.

A project manager was appointed and their job was to organize the cards after our meetings into a software package to track the requests, the progress on them, and prepare for the next meeting with updates for the entire committee.

A funny thing happened over the next few weeks. It turns out our development staff was so quick at implementing features and fixing bugs that the steering committee was unable to keep up with the progress. More time was spent trying to keep "The Project" updated and current than was required to enhance the software.

The developers had recently started using Bugzilla to organize themselves and give me insight into what they were doing during the day. We were using it so well, in fact, we proposed disbanding the committee in favor of relying on Bugzilla--with a few usage guidelines.

Rule Number One

Whether it was a bug, feature request, or fix, I had a simple rule for all our users and developers: If it's not in Bugzilla it doesn't exist.

For end-users it meant that everything they wanted the system to do, or anything they thought needed fixing, or anything they thought could look better or perform faster had to be entered into the system--by them.

Users couldn't complain about a bug they hadn't reported. They couldn't be waiting for a feature they hadn't asked for. By entering the bug themselves, users took ownership of the bug's reporting, its description, and ultimately (and this is important) its closing. A bug wasn't closed until the user confirmed it in production.

A side-benefit of using Bugzilla is it also became our working requirements tool. Users would describe what they thought they needed, developers would ask questions about it, users would clarify, developers would confirm, and the end result was a complete audit trail of a design requirement, followed from definition, implementation, deployment, to end-user acceptance.

Does your project management software do that?

For developers it meant they didn't work on anything that didn't exist in Bugzilla even if they had to enter it themselves.

One of the benefits of a defect tracking system over project management is the ability to create tasks (incidents, bugs, items, whatever you want to call them) to document what it is your developers do all day. Bugzilla was then able to report who opened items, who worked on them, who checked-in the fixes, and when the items were resolved.

As a manager I discovered it more valuable to monitor the velocity of my staff's productivity than the time they spent being productive. As the system's original developer (but kicked-out of coding by my staff) I discovered I could use Bugzilla as a way to program through my staff, except instead of writing Smalltalk or PHP I only needed to describe what I wanted it to do and it would find its way into the code base.

Making Bugzilla easy for end-users meant relieving them of having to answer all the questions Bugzilla asks. We agreed that end-users were only responsible for the description and for prioritizing requests, so engineering had an idea how important each one was to them.

Each new bug would go through triage, usually by a developer. It was the developer's responsibility to figure out which product the bug related to, which category, and what the bug's severity was.

And because Bugzilla copies bug owners on everything that happens to their requests, our end-users never had to ask if something was being worked on or what its status was. They received email updates every time a bug's status changed and learned to get excited when they saw the CVS COMMIT messages recorded to their requests.

Engineering and QA shared the responsibility of determining which fixes would be included in which releases. We delivered to production both hot fixes and releases.

Hot fixes consisted of bug fixes and enhancements with minimal or isolated impact to the database, that could be moved into production with few or no side effects. Hot fixes could occur daily, and it was not unusual for cosmetic or low-impact bugs to be corrected same-day.

Full releases were reserved for database changes that impacted either many systems or our posting programs. Since protecting the database was our production rule #1, we were careful that database changes and the posting programs were well tested before releasing them into production.

Thursday, May 1, 2008

The next big thing

Joel Spolsky is the president of Fog Creek Software and a frequent commentator on the software development industry. His latest article, Architecture Astronauts, criticizes Microsoft's continued re-invention of something no one seems to want. 

Read Joel's article to get the full comic effect, but here's a pertinent excerpt: 
When did the first sync web sites start coming out? 1999? There were a million versions. xdrive, mydrive, idrive, youdrive, wealldrive for ice cream. Nobody cared then and nobody cares now, because synchronizing files is just not a killer application. I'm sorry. It seems like it should be. But it's not.
A killer application would certainly be the next big thing. If you're unsure what a killer application is, think of the first word processor, spreadsheet, or database program. Some of you may not appreciate the impact a killer application can have on the world because the last killer application was Tim Berners-Lee's introduction of the World Wide Web in 1991--17 years ago! 

As it relates to "the next big thing," or what users really want, two things popped into my mind immediately after reading Joel's essay. The first is my frustration with needing a different user ID for every website that requires registration. As if to add insult to injury, when I went to comment on Joel's essay on Reddit I had to create Yet Another Account Profile (YAAP). I was reminded of the second while reading other users' comments and noticing how poorly discussion forums are implemented as web applications. 

There are many companies and portals that pretend to provide single sign-on, the idea being that users create a single account, including user ID and password, and are automatically credentialed for multiple applications across the internet. The problem I see with the current approach is two-fold. First, I don't trust many companies to be the guardians of my "official" profile, due to my suspicion of their ulterior motives. Will my profile information be sold? Will it be harvested by advertising companies? What will the company or their "partners" do with the information about other sites I authenticate to using their credentials? 

Microsoft Passport wanted to be a single sign-on for the internet, but Microsoft had already demonstrated their contempt for users by making it so difficult to verify the authenticity of my Windows license when simply upgrading my computer--much less throwing it out and replacing it with a new one. Even Microsoft seems to have acknowledged Passport's reputation by dropping it. Of course, not willing to let go of control completely, they re-invented it as Windows Live. 

Do you really want to trust Microsoft with your profile after their Orwellian Windows Genuine Advantage patch? 

There are entities I might be willing to trust. First is the US Post Office. We already trust them to deliver our mail, first class and bulk, desirable or not, and best of all--everything is brought to my door-step by a uniformed representative of the United States Government. 

Perhaps out of necessity, I also trust my bank. Even if it is out of necessity, my credit union hasn't given me cause to believe they want to own me. Instead, my credit union (and bank before that) actually trust me with their money for my credit card, car loan, mortgage, and home equity LOC. 

It's a place to start, anyway. OK, two places to start. 

I'll discuss the next thing in the next article, which I'm thinking of calling "The next big thing should stop ruining the last good thing."