Wednesday, September 16, 2015

Professor Strunk's 1919 advice to developers in 2016

Before computers existed, English professor William Strunk gave sage advice on how to develop software.
"Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts."
Frankly, it was the "...and a machine no unnecessary parts" that I thought especially relevant.

While preparing for a presentation at Dreamforce 15, I was reminded of this advice by a slide recommending Salesforce developers clean up their change sets and deployments before moving them into production.

Honestly, during the development of even a single method or subroutine it's possible for code that had meaning early in the method's development to be orphaned, or otherwise have no effect, after multiple iterations.  It is the developer's responsibility to identify and eliminate unnecessary code and unused variables, just as unused methods should be eliminated from a class (no matter our affection for them).  Why should it surprise anyone that unused classes, pages, components, reports, report folders, flows, images, etc. should also be eliminated before being immortalized in a production system?

"...and a machine no unnecessary parts."

Some integrated development environments (IDEs) have code browsers that are able to identify unreachable code.  I'm not yet aware of one for Salesforce development, but if you know of one please share it.

Until then, it is the developer's responsibility to eliminate unused code and other artifacts from their projects and repositories--remembering to pay as much attention to static resources, permission sets, profiles, and other configuration elements as to Visualforce and Apex.

Salesforce haystacks are large enough without the code and components that do nothing getting in the way of maintenance, documentation, code coverage, and ultimately process improvement.

Thursday, September 3, 2015

Step-by-step easy JSRemoting with Angular

I've been doing a lot of reading the last week or so, learning how to mix AngularJS with Visualforce.  I've watched videos, read articles, and read documentation, but none of them were simple.  It's as though developers couldn't resist showing off something else, and that something else buried the simplicity of simple JSRemoting calls inside Visualforce.

All I wanted to do was call some already-existing remote methods from inside an Angular page, and make sure they played nice with the other Angular features, like "promises."

We're going to start with a simple Apex controller with two methods.  The first, Divide(), simply divides its first argument by its second and returns the result.  Simple as it is, it will be valuable later when we test our Angular Javascript to see how exceptions are handled--all we need to do is pass 0 for the second argument to see how exceptions behave.

The second method, Xyzzy(), simply returns a string.  All remote and REST classes should have some simple methods that do very little to simplify testing.

global class TomTestController {
  
    @RemoteAction
    global static Double Divide(double n, double d) {
        return n / d;
    }

    @RemoteAction
    global static String Xyzzy() {
        return 'Nothing happens.';
    }
}

After saving that class in your org, create a new page (mine's called TomTest.page) with the simple contents below.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" controller="TomTestController">
    <apex:includeScript value="//ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js" />
    
    <div ng-app="myApp" ng-controller="myCtrl">
        <p>Hello, world!</p>
    </div>

    <script type="text/javascript">

        var app = angular.module('myApp', [ ]);
        app.controller('myCtrl', function($scope, $q) {
            
        });
        
    </script>

</apex:page>

The page above outputs the obligatory "Hello, world!" but functionally does nothing Angular-ish, short of defining an app and giving it a controller.  You should confirm the page does very little by inspecting it from your browser and seeing what's written to the console.  Knowing what "nothing" looks like is the first step to recognizing when "something" happens, and whether it was something you intended.

The best thing about the page above is it doesn't include anything that distracts from our purpose.  There are no stylesheets to wonder about and no other Javascript libraries you may think are required to get a simple example working.

The next thing we're going to do is add our Divide() method. But before we drop it into the Javascript let's look at what it normally looks like inside our non-Angular Javascript.

TomTestController.Divide(1, 1, function(result, event) {
    if (event.status)
        console.log('It worked!');
    else
        console.log('It failed!');
});

This is about as simple as JSRemoting code gets.  The browser calls the Divide() method on the TomTestController class, passing the numbers 1 and 1.  When the callout finishes, event.status tells us whether it worked (true) or failed (false).

In fact, we can put that call into our Javascript right now and run it to see what happens.  Update your page so it contains:


<apex:page showHeader="false" sidebar="false" standardStylesheets="false" controller="TomTestController">
    <apex:includeScript value="//ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js" />
    
    <div ng-app="myApp" ng-controller="myCtrl">
        <p>Hello, world!</p>
    </div>

    <script type="text/javascript">

        var app = angular.module('myApp', [ ]);
        app.controller('myCtrl', function($scope, $q) {
            
        });

        TomTestController.Divide(1, 1, function(result, event) {            
            if (event.status)
                console.log('It worked!');
            else
                console.log('It failed!');
        }, {buffer: false});
        
    </script>

</apex:page>

You should see "It worked!" in your console log.

To make our remote call work with Angular promises, we need to wrap it inside a function that Angular-izes our call with promises so developers can use the .then().then().catch() code we've been reading so much about.

function Divide(n, d) {  
    var deferred = $q.defer();
    try {
        TomTestController.Divide(n, d, function(result, event) {
            if (event.status)
                deferred.resolve(result);
            else
                deferred.reject(event);
        }, {buffer: false});
    } catch (e) {
        deferred.reject(e);
    }
    
    return deferred.promise;
}

Our callout is still recognizable, but it has a few new features.  Principally, it creates a promise and calls either deferred.resolve() or deferred.reject(), depending on the call's success or failure.
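
If you find yourself writing that wrapper for every remote method, the boilerplate can be factored out.  Below is a sketch of my own (not from Salesforce or Angular) that turns any JSRemoting function into one returning an Angular promise.  It belongs inside the controller, where $q is in scope.

// hypothetical helper, my own sketch: given any JSRemoting function,
// return a version of it that yields an Angular promise
function promisify(remoteFn) {
    return function() {
        var deferred = $q.defer();
        // copy the caller's arguments, then append the remoting callback and options
        var args = Array.prototype.slice.call(arguments);
        args.push(function(result, event) {
            if (event.status)
                deferred.resolve(result);
            else
                deferred.reject(event);
        });
        args.push({buffer: false});
        try {
            remoteFn.apply(null, args);
        } catch (e) {
            deferred.reject(e);
        }
        return deferred.promise;
    };
}

With that in place, var Divide = promisify(TomTestController.Divide); builds the same Divide() we're about to write by hand.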

Once our function is defined inside Angular's controller we can call it with (1, 1) to see how it works, and how it looks when it works inside the inspector.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" controller="TomTestController">
    <apex:includeScript value="//ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js" />
    
    <div ng-app="myApp" ng-controller="myCtrl">
        <p>Hello, world!</p>
    </div>

    <script type="text/javascript">
        var app = angular.module('myApp', [ ]);
        app.controller('myCtrl', function($scope, $q) {
            
            function Divide(n, d) {  
                var deferred = $q.defer();
                try {
                    TomTestController.Divide(n, d, function(result, event) {
                        if (event.status)
                            deferred.resolve(result);
                        else
                            deferred.reject(event);
                    }, {buffer: false});
                } catch (e) {
                    deferred.reject(e);
                }
                return deferred.promise;
            }
            
            Divide(1, 1);
        });
        
    </script>

</apex:page>

I know.  When you inspected it again you couldn't tell if anything happened.  The page functioned exactly as before.

So now let's see what happens if we use one of those .then() calls.  First, change the Divide() call above so it looks like:

Divide(1, 1).then(function() { console.log('Success!'); });

Or you can write it how you may be seeing it in other Angular examples...

Divide(1, 1)
    .then(function() { console.log('Success!'); });

You should have seen the text "Success!" printed on the console.

But what if our .then() function needed the output of our Divide()?  What would that look like?

Divide(1, 1)
    .then(function(data) { console.log(data); });

Notice in the code above our anonymous function now accepts an argument (data) and prints it instead of "Success!"  When you run this version of the code you should see "1" in the console log.

But Divide() can also fail, and that is why .then() takes two function arguments: the first for successful returns, the second for failures.

Let's pass two functions and modify our console.log() calls so we can tell which we're getting.

Divide(1, 1)
    .then(
        function(arg) { console.log('good', arg); },
        function(arg) { console.log(' bad', arg); }
    );

You should have seen "good 1" in the console log.

But what about errors?  What happens when we get an exception?  If you haven't already tried it, change the code to Divide(1, 0).  What did you get?  I got an error warning me, "Visualforce Remoting Exception: Divide by 0" followed by "bad >Object...".  Look at the object sent to the second anonymous function and notice it's the "event" object passed when the code called deferred.reject(event).
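
If the raw object dump is too opaque, pull fields out of the event object in the failure handler.  If memory serves from the Remoting documentation, message, where, and type are the useful ones, but verify them against your own console output.

Divide(1, 0)
    .then(
        function(arg) { console.log('good', arg); },
        function(event) {
            // field names per my reading of the Remoting docs -- confirm in your console
            console.log('bad', event.type, event.message, event.where);
        }
    );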

Now that you have JSRemoting working inside Angular with promises, it's a good time to play around with it.  Below is my addition of Xyzzy().  Sometime tomorrow I think I'll create a remote for Echo() that simply returns its argument (a sketch appears after the code below), or maybe a quick [ select ... from something ... limit 10 ] to see what that looks like.

Let me know how it works for you.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" controller="TomTestController">
    <apex:includeScript value="//ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js" />
    
    <div ng-app="myApp" ng-controller="myCtrl">
        <p>Hello, world!</p>
    </div>

    <script type="text/javascript">
        var app = angular.module('myApp', [ ]);
        app.controller('myCtrl', function($scope, $q) {
            
            function Divide(n, d) {  
                var deferred = $q.defer();
                try {
                    TomTestController.Divide(n, d, function(result, event) {
                        if (event.status)
                            deferred.resolve(result);
                        else
                            deferred.reject(event);
                    }, {buffer: false});
                } catch (e) {
                    deferred.reject(e);
                }
                return deferred.promise;
            }
            
            function Xyzzy() {  
                var deferred = $q.defer();
                try {
                    TomTestController.Xyzzy(function(result, event) {
                        if (event.status)
                            deferred.resolve(result);
                        else
                            deferred.reject(event);
                    }, {buffer: false});
                } catch (e) {
                    deferred.reject(e);
                }
                return deferred.promise;
            }
            
            Divide(1, 0)
                .then(function(success) { Xyzzy(); })
                .catch(function(error) { console.log('ERROR', error); });
        });
        
    </script>

</apex:page>
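
For what it's worth, here's a sketch of what that Echo() might look like on the Apex side; its Angular wrapper would follow the same pattern as Xyzzy() above.

    // hypothetical Echo() remote -- returns whatever it's given, handy for
    // confirming arguments survive the round trip intact
    @RemoteAction
    global static String Echo(String anArgument) {
        return anArgument;
    }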

Monday, August 31, 2015

Javascript AS a Visualforce page

There are several reasons a developer may want or need to have their Javascript inside a Visualforce page.  Before explaining what those reasons may be, let's just look at how you go about it.

Step 1 - Create your Javascript Page

The source below comes from a page I named "TomTestJS.page"

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" 
    contentType="text/javascript">
 console.log('We are here!');
 document.write('This is the Javascript');
</apex:page>

Step 2 - Include your Javascript inside another page

Use <apex:includeScript value="{!$Page.TomTestJS}" /> if the Javascript needs to be loaded at the top of the page and <script src="{!$Page.TomTestJS}" /> if it needs to be loaded later, perhaps after some content has been rendered to the DOM.

The page below renders:

This is the page.
This is the Javascript.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false">

<p>This is the page.</p>

<script src="{!$Page.TomTestJS}" />

</apex:page>

The page above included the Javascript below some page content, which is why the document.write() output appeared below the HTML output.

If we instead did it the more traditional way using <apex:includeScript /> at the top of the page, the output renders:

This is the Javascript.
This is the page.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false">
<apex:includeScript value="{!$Page.TomTestJS}" />
<p>This is the page.</p>

</apex:page>

I can think of a few reasons why programmers may want to do this.  Coincidentally, they're the reasons I've wanted to do this.
  1. It's easier to track the source code in a repository if the files exist as independent entities and not part of a zip file.
  2. It's easier to see when a specific Javascript was last modified.
  3. It allows the Visualforce preprocessor to resolve merge fields in the Javascript before it's loaded into the browser (for assets that may exist in a static resource or as another page); a sketch appears below.
  4. It allows what would normally live inside <script /> tags inside a Visualforce page to exist independently, change independently, etc.
There are other reasons, too.  Today I had to port some Javascript and HTML from a media team into a sandbox, and the team had taken liberties with their templates and other references that required "fixing" to work inside Salesforce.  Moving one of these Javascript files into a page and letting the preprocessor take care of a merge field to resolve the location of a static resource worked like a charm.
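
To make reason 3 concrete, here's a sketch of that kind of page (the resource name below is invented).  The merge field is resolved server-side, before the browser ever sees the script.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false" 
    contentType="text/javascript">
 // the preprocessor replaces the merge field with the static resource's real URL
 var imagePath = "{!URLFOR($Resource.MediaAssets, 'img/logo.png')}";
 console.log('images live at', imagePath);
</apex:page>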

Thursday, August 27, 2015

How I got started with Angular and Visualforce

If you're reading this then you may be early-on in exploring AngularJS and wondering how you can get the W3Schools Angular Tutorial working inside Salesforce's Visualforce.

The tutorial's first page looks relatively straightforward, and with a simple closing tag for the <input> it will even pass Visualforce's compiler.

<!DOCTYPE html>
<html lang="en-US">
<script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<body>

    <div ng-app="">
  <p>Name : <input type="text" ng-model="name" /></p>
  <h1>Hello {{name}}</h1>
    </div>

</body>
</html>

If you tried this inside Visualforce you likely got the same output I did.   Instead of behaving like it does in the tutorial, Visualforce stubbornly displays "{{name}}."

Without delay, here's a Visualforce-ized version of the W3Schools tutorial.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false">
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js" />

<div ng-app="noop">
    <p>Input something in the input box:</p>
 
    <p>Name : <input type="text" ng-model="name" placeholder="Enter name here" /></p>
 
    <h1>Hello {{name}}</h1>
</div>

<script>
    var myAppModule = angular.module('noop', []);
</script>

</apex:page>

Visualforce requires ng-app to have a value to pass its own syntax checker.  If a value is passed to ng-app to get past Visualforce then that value is interpreted by Angular as a module used to bootstrap your page and must be defined.

In the example above I created a module called "noop" that literally does nothing but take up space to make something else work.

Now my page behaved just like W3Schools said it should.

Having Googled around some more, I found multiple tutorials and videos introducing the neat things people have done with Visualforce and Angular, but all of them are too complicated for the absolute novice.  The search pages did alert me that Salesforce is so geeked about the combination of Angular and Visualforce that they've created an app for the AppExchange that installs Angular, Underscore, Bootstrap, and several other JS libraries.  The app is called Angilar MP [sic].  The page gives instructions for installing it into your org and includes some demo pages showing how to put more complicated examples together.

Since the app loads all those Javascript libraries into a static resource, we can rewrite our application to look just a tad more Visualforce-like.

<apex:page showHeader="false" sidebar="false" standardStylesheets="false">
<apex:includeScript value="{!URLFOR($Resource.AngularMP, 'js/vendor/angular.1.0.6.min.js')}" />
<div ng-app="noop">
    <p>Input something in the input box:</p>
 
    <p>Name : <input type="text" ng-model="name" placeholder="Enter name here" /></p>
 
    <h1>Hello {{name}}</h1>
</div>

<script>
    var myAppModule = angular.module('noop', []);
</script>

</apex:page>

All it really does is replace the <script src="..." /> with <apex:includeScript value="..." /> and use the static resource's Angular JS source.

PS If you're not already familiar with it, another of the cool resources included in the package is UnderscoreJS.  There are lots of cool Javascript utility functions in there I wish I'd known about years ago.  Regardless, they'll make my current pages easier to write.
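
For a taste, here are two Underscore functions doing what would otherwise take hand-rolled loops (assume accounts holds records returned from a remote call):

var names  = _.pluck(accounts, 'Name');          // every Name, in order
var byCity = _.groupBy(accounts, 'BillingCity'); // records indexed by a field's value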


Tuesday, May 5, 2015

Working with Salesforce's destructiveChanges.xml

If you've ever needed to remove a bunch of custom objects, fields, pages, classes, etc. from an org, or from multiple orgs, you've probably come across documentation about destructiveChanges.xml.  If you're familiar with developing on the Salesforce platform using MavensMate or Eclipse, you're probably already familiar with package.xml.  Both files have nearly identical formats.  The difference between them is package.xml enumerates the items you want to synchronize between your org and your development environment, and destructiveChanges.xml enumerates the items you want to obliterate (delete) from whatever org you point it at.


The easiest way to see how similar they are is to look at what each of them looks like empty.

package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
</Package>

destructiveChanges.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
</Package>

The only difference between them is destructiveChanges doesn't have a <version> tag.

Let's look again after we add a class to each.  In package.xml it's a class we're synchronizing, and in destructiveChanges.xml it's a class we want to remove from our org.

package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
    <types>
        <members>TomTest</members>
        <name>ApexClass</name>
    </types>
</Package>

destructiveChanges.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>TomTest</members>
        <name>ApexClass</name>
    </types>
</Package>

As a percentage, the two files are more similar now than they were before. The only difference between them is still the <version> tag.
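
The same shape works for any metadata type.  Deleting a custom field, for example (the object and field names below are invented), qualifies the member with its object:

<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Invoice__c.LegacyCode__c</members>
        <name>CustomField</name>
    </types>
</Package>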

Executing destructive changes

So how do we execute destructive changes?  The short answer is using Salesforce's migration tool.  In a few minutes we'll execute "ant undeployCode," but we've a few items to take care of first.

For me, the first problem was where to put the files destructiveChanges.xml and package.xml. The former is new and the latter is NOT the same file that usually appears in the src/ directory.

At Xede, we create git repositories for our projects.  Each repository is forked from xede-sf-template.

DrozBook:git tgagne$ ls -lR xede-sf-template
total 16
-rw-r--r--  1 tgagne  staff   684 Aug 21  2014 README.md
-rwxr-xr-x  1 tgagne  staff  1430 Sep 17  2014 build.xml
drwxr-xr-x  4 tgagne  staff   136 Aug 21  2014 del

xede-sf-template//del:
total 16
-rwxr-xr-x  1 tgagne  staff  563 Jan 17  2014 destructiveChanges.xml
-rw-r--r--  1 tgagne  staff  136 Aug 21  2014 package.xml

The repo includes a directory named "del" (not very imaginative) and inside it are the files destructiveChanges.xml and package.xml.  It seems odd to me, but the migration tool requires both the destructiveChanges.xml AND a package.xml to reside there.

The package.xml file is the same empty version as before.  The template's destructiveChanges.xml contains placeholders, but still basically does nothing.

DrozBook:xede-sf-template tgagne$ cat del/package.xml
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <version>29.0</version>
</Package>

DrozBook:xede-sf-template tgagne$ cat del/destructiveChanges.xml 
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <name>ApexClass</name>
    </types>
    <types>
        <name>ApexComponent</name>
    </types>
    <types>
        <name>ApexPage</name>
    </types>
    <types>
        <name>ApexTrigger</name>
    </types>
    <types>
        <name>CustomObject</name>
    </types>
    <types>
        <name>Flow</name>
    </types>
    <types>
        <name>StaticResource</name>
    </types>
    <types>
        <name>Workflow</name>
    </types>
</Package>

Now that we have a directory with both files in it, and we have versions of those files that basically do nothing, let's get ready to run the tool.

There's one more file we need to create that's required by the tool, build.xml.  If you're not already using it for deployments you're likely not using it at all.  My version of build.xml is in the parent of del/.  You can see it above in the directory listing of xede-sf-template.

DrozBook:xede-sf-template tgagne$ cat build.xml
<project name="xede-sf-template" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">

    <property environment="env"/>

    <target name="undeployCode">
        <sf:deploy
            username="${env.SFUSER}"
            password="${env.SFPASS}"
            serverurl="${env.SFURL}"
            maxPoll="${env.SFPOLL}"
            ignoreWarnings="true"
            checkOnly="${env.CHECKONLY}"
            runAllTests="${env.RUNALLTESTS}"
            deployRoot="del"/>
    </target>

</project>

Since build.xml is in the parent directory of del/, the "deployRoot" attribute is "del," the subdirectory.

The environment property (<property environment.../>) allows operating system environment variables to be substituted inside your build.xml.  In the example above, the environment variables are about what you'd expect them to be (using the bash shell):

export SFUSER=myusername
export SFPASS=mysecretpassword
export SFURL=https://login.salesforce.com (or https://test.salesforce.com)
export SFPOLL=120
export CHECKONLY=false
export RUNALLTESTS=false

Right about now you may be thinking, "Who wants to set all those environment variables?" Truthfully, I don't.  That's why I created a little script to do it for me called "build."  But before we get into that let's just edit our build.xml file so it doesn't need environment variables.

The build.xml below is for a production org.

DrozBook:xede-sf-template tgagne$ cat build.xml
<project name="xede-sf-template" default="usage" basedir="." xmlns:sf="antlib:com.salesforce">

    <target name="undeployCode">
        <sf:deploy
            username="tgagne+customer@xede.com"
            password="mysupersecretpassword"
            serverurl="https://login.salesforce.com"
            maxPoll="120"
            ignoreWarnings="true"
            checkOnly="false"
            runAllTests="false"
            deployRoot="del"/>
    </target>

</project>

So now we have our build.xml, our del directory, del/destructiveChanges.xml which lists nothing and an empty del/package.xml file.  Let's run ant.

DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k0SKAQ
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k0SKAQ successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

As you can see, it did nothing.  Let's give it something to do, but make it a class that doesn't exist in the target org.

DrozBook:xede-sf-template tgagne$ cat del/destructiveChanges.xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>DoesNotExist</members>
        <name>ApexClass</name>
    </types>
    ... same as before ...
</Package>

I've added a single class, DoesNotExist, to the ApexClass types list and we'll run it again.

DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k0mKAA
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] All warnings:
[sf:deploy] 1.  destructiveChanges.xml -- Warning: No ApexClass named: DoesNotExist found
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k0mKAA successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

Ant (with the migration tool plugin) is telling us it tried removing the Apex class "DoesNotExist," but the class didn't exist.  If the class had existed before but had already been removed, this is the message it would display.

As a reader exercise, go ahead and create a class "DoesNotExist" in your org.  I went into Setup->Classes->New and entered "public class DoesNotExist{}". It's about as useless a class as you can create, though I've seen and perhaps written worse.

If you run ant again you'll see it doesn't report an error.
DrozBook:xede-sf-template tgagne$ ant undeployCode
Buildfile: /Users/tgagne/git/xede-sf-template/build.xml

undeployCode:
[sf:deploy] Request for a deploy submitted successfully.
[sf:deploy] Request ID for the current deploy task: 0AfU00000034k11KAA
[sf:deploy] Waiting for server to finish processing the request...
[sf:deploy] Request Status: InProgress
[sf:deploy] Request Status: Succeeded
[sf:deploy] *********** DEPLOYMENT SUCCEEDED ***********
[sf:deploy] Finished request 0AfU00000034k11KAA successfully.

BUILD SUCCESSFUL
Total time: 15 seconds

And there you have it!  For a little extra, I'll share my "build" script, which makes it pretty easy to extract, undeploy (what we just did), and deploy code, with or without tests or verification-only.
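
Until that post arrives, here's a minimal sketch of what such a wrapper might look like--mine does more, but the variable names match the build.xml above.

#!/bin/bash
# "build" -- hypothetical wrapper that sets the environment build.xml expects,
# then hands the requested target to ant
# usage: build target username password [loginurl]

export SFUSER=$2
export SFPASS=$3
export SFURL=${4:-https://test.salesforce.com}
export SFPOLL=120
export CHECKONLY=${CHECKONLY:-false}
export RUNALLTESTS=${RUNALLTESTS:-false}

ant "$1"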

Tuesday, November 18, 2014

JWT Bearer token flow can be used for community users -- example in cURL

Abstract

Problem
How can community users authenticate to Salesforce via the API without having to give their permission?
Answer
Use the JWT Bearer Token Flow
Disclaimer
I was going to wait a while longer before posting this to make sure it was beautifully formatted and brilliantly written--but that wouldn't have helped anyone trying to solve this problem in the meantime (like I was a few weeks back).

So in the spirit of both this blog's name and agile development, I'm publishing it early, perhaps not often, but hopefully in-time for another developer.
Attributions
Thanks to Jim Rae (@JimRae2009) for suggesting this approach, inspired by his work integrating canvas with node.js on the desktop, and his related Dreamforce 2014 presentation.

Background

A client of ours has an existing, non-Salesforce website with LOTS (tens of thousands) of users.  The client also has a Salesforce Service Cloud instance they use for all their customer support, and they wanted their customers to interact with the CRM through their website, without iframes or exposing the SF portal to their users.
The solution is to use the JWT Bearer Token Flow.  Salesforce does not support username/password authorization for community-licensed users, and the other OAuth flows require a browser to intermediate between two domains.
Though Salesforce's documentation does a good job describing the flow, it's a little weak on specifics.  Luckily, there's a reference Apex implementation on GitHub (salesforceidentity/jwt), and below I'll provide a reference implementation using cURL.

Configuring your connected app

But before starting, there are a few things to know about your connected app.

  1. Your connected app must be created inside the target Salesforce instance.  You cannot re-use the same consumer and client values across orgs unless your app is part of a package.
  2. Your connected app must also use digital signatures.  This will require creating a certificate and private key.  Openssl commands for doing this appear later in this article.
  3. You must set the "admin approved users are pre-authorized" permitted users option to avoid login errors.

Configuring your community

  1. The community profile must allow access to the Apex classes that implement your REST interface
  2. Each community user will require the "API Enabled" permission.  This cannot be specified at the profile-level.

Creating the certificate

A single openssl command can create your private key and a self-signed certificate.
openssl req \
    -subj "/C=US/ST=MI/L=Troy/O=Xede Consulting Group, Inc./CN=xede.com" \
    -newkey rsa:2048 -nodes -keyout private.key \
    -x509 -days 3650 -out public.crt
Substitute your own values for the -subj parameter.  It's a self-signed certificate, so no one will believe you anyway.  The benefit of using the -subj parameter is avoiding the interactive certificate questions.
The file "public.crt" is the certificate to load into your connected app on Salesforce.

Creating a community user

If you already have a community user you can skip to the next section.  If you don't, you will need to create one to test with. 
Make sure the user that creates the community user (either from a Contact, or an Account if person-accounts are enabled) has a role.  Salesforce will complain when the Contact is enabled for login if the creating user doesn't have a role.

cURL Example

DrozBook:scripts tgagne$ cat jwlogin
#!/bin/bash

if [ $# -lt 2 ]; then
 echo 1>&2 "usage: $0 username sandbox"
 exit 2
fi

export LOGINURL=https://yourportalsite.force.com/optional-part
export CLIENTID='3MVG9Gmy2zm....value from connected-app...OQzJzb4BF469Fkip'

#from https://help.salesforce.com/HTViewHelpDoc?id=remoteaccess_oauth_jwt_flow.htm#create_token

#step 1
jwtheader='{ "alg" : "RS256" }'

#step 2
jwtheader64=`echo -n $jwtheader | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

timenow=`date +%s`
expires=`expr $timenow + 300`

#step3
claims=`printf '{ "iat":%s, "iss":"%s", "aud":"%s", "prn":"%s", "exp":%s }' $timenow $CLIENTID $LOGINURL $1 $expires`

#step4
claims64=`echo -n $claims | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

#step5
token=`printf '%s.%s' $jwtheader64 $claims64`

#step6
signature=`echo -n $token | openssl dgst -sha256 -binary -sign private.key | base64 | tr '+/' '-_' | sed -e 's/=*$//'`

#step7
bigstring=`printf '%s.%s' $token $signature`

curl --silent \
 --data-urlencode grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer \
 --data-urlencode assertion=$bigstring \
 $LOGINURL/services/oauth2/token \
 -o login.answer

Comments on jwlogin

  • LOGINURL comes from the community's Administrative Settings tab.
  • CLIENTID is the connected app's "Consumer Key."
  • The big trick in the script above is step 6, the signing.  The token must be hashed and signed with a single openssl command.  (A decoding sanity check follows below.)
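
One debugging habit worth adding (my own, not part of the flow): before blaming Salesforce for a rejected assertion, decode the claims segment and confirm the JSON is what you intended.  The padding stripped in step 4 has to go back on first.

# re-pad the URL-safe base64, undo the character translation, then decode
pad=$claims64
while [ $(( ${#pad} % 4 )) -ne 0 ]; do pad="$pad="; done
echo -n $pad | tr '-_' '+/' | openssl base64 -d -A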

Using the token in subsequent cURL commands

A successful response to the authentication request will resemble:
{"scope":"id full custom_permissions api visualforce web openid chatter_api","instance_url":"https://allyservicingcrm--gagne2.cs7.my.salesforce.com","sfdc_community_url":"https://gagne2-gagne2.cs7.force.com/customers","token_type":"Bearer","sfdc_community_id":"0DBM00000008OTXOA2","access_token":"00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi"}
It makes more sense if I pretty-print it.
{
    "scope": "id full custom_permissions api visualforce web openid chatter_api",
    "instance_url": "https:\/\/mydomain--sandboxname.cs7.my.salesforce.com",
    "sfdc_community_url": "https:\/\/communityname-sandboxname.cs7.force.com\/customers",
    "token_type": "Bearer",
    "sfdc_community_id": "0DBM00000008OTXOA2",
    "access_token":  "00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi"
}
The two important pieces of information above are the instance_url and the access_token.

The best way to describe how to use this information is to show you a subsequent curl command before substitution, and after.

Before (source)

DrozBook:scripts tgagne$ cat rerun
#!/bin/bash

INSTANCE=`sed -e 's/^.*"instance_url":"\([^"]*\)".*$/\1/' login.answer`
TOKEN=`sed -e 's/^.*"access_token":"\([^"]*\)".*$/\1/' login.answer`
set -x
curl \
    --silent \
    -H "Authorization: Bearer $TOKEN" \
    "$INSTANCE/services/apexrest/$1"

After

DrozBook:scripts tgagne$ rerun SMInquiry/xyzzy
+ curl --silent -H 'Authorization: Bearer 00DM0000001dHnA!AQsAQIf7Muuu5BDtn8SXgWDJdwFXmvLAoRcqp0jZaObiv_6js.RSjK2ZZOCU29DSPc5s5JfsHdzmQsYpeFEZg7vgj2ynWTvi' https://mydomain--sandboxname.cs7.my.salesforce.com/services/apexrest/SMInquiry/xyzzy


Sunday, December 8, 2013

Simple dependency management for dependent Salesforce objects

Introduction

Salesforce programmers know it is sometimes difficult to save multiple objects with dependencies on each other in the right order and with as little effort as possible.  The "Collecting Parameter" pattern is an easy way to do this, and this article will show you how to use it in your own code.

Unit of Work

In June 2013, FinancialForce's CTO, Andrew Fawcett, wrote his Unit Of Work article, explaining how a dependency mechanism might be implemented to simplify saving multiple objects with dependencies between them.

The problem is a common one for Salesforce programmers--the need to create master and detail objects simultaneously.  Programmers must save the master objects first before their IDs can be set in the detail objects.

An example might be an invoice and its line-items.  To save any InvoiceLine__c, its master object, an Invoice__c, must be saved first.

To solve this problem, Xede uses a pattern popularized by Kent Beck in his 1995 book, Smalltalk Best Practice Patterns, called Collecting Parameter.  For those unfamiliar with the Smalltalk programming language, it can be briefly described as the first object-oriented language where everything is an object.  Numbers, messages, classes, stack frames--everything.  In 1980 (nearly 34 years ago) it also supported continuable exceptions and lambda expressions.  Lest I gush too much about it, I'll say only that nearly all object-oriented languages owe their best features to Smalltalk and their worst features to either trying to improve on it or ignoring prior art.

Returning to the subject at hand, dependency saves: Xede would create two classes to wrap the sobjects, Invoice and InvoiceLine.  Each instance of Invoice will aggregate within it the InvoiceLine instances belonging to it.

The code might look something like this.

// create an invoice and add some lines to it
Invoice anInvoice = new Invoice(anInvoiceNumber, anInvoiceDate, aCustomer);
...
// adding details is relatively simple
anInvoice.add(anInvoiceLine);
anInvoice.add(anotherInvoiceLine);
anInvoice.save();

So now our Invoice has two detail instances inside it.  Keeping true to the OO principles of data-hiding and loose coupling, we can safely ignore how these instances store their sobject variables: Invoice's Invoice__c and InvoiceLine's InvoiceLine__c.  But without knowing how they store their sobjects, how can we save the master and detail records with the minimum of two DMLs, one to save the master and another to save the details?

We do it using a collecting parameter.

Collecting Parameter

A collecting parameter is basically a collection of like things that cooperating classes add to.  Imagine a basket passed among attendees at a charity event.  Each person receiving the collection basket may or may not add cash or checks to it.  In both programming and charity fundraisers it is better manners to let each person add to the basket themselves than to have an usher reach into strangers' pockets and remove cash.  The latter should be regarded as criminal--if not at charity events, then in programming.

For programmers, such a thing violates data-hiding; not all classes keep their wallets in the same pocket (variable), some use money clips rather than wallets, some use purses (collection types), some may have cash while others have checks or coins.  Writing code that rummages through each class's data looking for cash is nearly impossible--even with reflection.  In the end it all gets deposited into a bank account.

Let's first look at the saveTo() methods of Invoice and InvoiceLine.  They are the simplest.

public with sharing class Invoice extends XedeObject {
    public Id getId() { return sobjectData.id; }

    public override void saveTo(list<sobject> aList, list<XedeObject> dependentList)
    {
        aList.add(sobjectData);

        // each line decides for itself whether it's ready to be saved
        for (InvoiceLine each : lines)
            each.saveTo(aList, dependentList);
    }

    Invoice__c sobjectData;
    list<InvoiceLine> lines;
}

Invoice knows where it keeps its own reference to Invoice__c (cohesion), so when it comes time to save it simply adds its sobject to the list of sobjects to be saved.  After that, it also knows where it keeps its own list of invoice lines and so calls saveTo() on each of them.

public with sharing class InvoiceLine extends XedeObject {
    public override void saveTo(list<sobject> aList, list<XedeObject> dependentList) {
        if (sobjectData.parent__c != null)  // if I already have my parent's id I can be saved
            aList.add(sobjectData);

        else if (parent.getId() != null) {  // else if my parent has an id, copy it and I can be saved
            sobjectData.parent__c = parent.getId();
            aList.add(sobjectData);
        }

        else
            dependentList.add(this); // I can't be saved until my parent is saved
    }

    Invoice parent;
    InvoiceLine__c sobjectData;
}

InvoiceLine's implementation is nearly as simple as Invoice's, but subtly different.

Basically, if the InvoiceLine already has its parent's id, or can get its parent's id, it adds its sobject data to the list to be saved.  If it doesn't have its parent's id then it must wait its turn, and adds itself to the dependent list.

Readers may wonder why Invoice doesn't decide for itself whether to save its children.  Invoice could skip sending saveTo() to its children when it doesn't have an id, but whether or not its children should be saved is not its decision--it's theirs.  They may have other criteria that must be met before they can be saved.  They may have two master relationships and be waiting for them both.  They may have rows to delete before they can be saved, or may have detail records of their own with criteria independent of whether Invoice has an id.  Whatever the reason, the rule is that each object should decide for itself whether it's ready to save, just as it's each person's decision whether and how much money to put into the collection basket.

In our example below, save() passes two collection baskets; one collects sobjects and the other collects instances of classes whose sobjects aren't ready for saving--yet.  save() loops over both lists until they're empty, and in this way is able to handle arbitrary levels of dependencies with the minimum number of DML statements.

Let's look at the base class' (XedeObject) implementation of save().

public virtual class XedeObject {
    public virtual void save() {
        list<XedeObject> objectList = new list<XedeObject>();
        list<XedeObject> dependentList = new list<XedeObject> { this };

        do {
            List<sobject> aList = new List<sobject>();
            List<sobject> updateList = new List<sobject>();
            List<sobject> insertList = new List<sobject>();

            objectList = new list<XedeObject>(dependentList);
            dependentList.clear();

            for (XedeObject each : objectList)
                each.saveTo(aList, dependentList);

            for (sobject each : aList) {
                if (each.id == null)
                    insertList.add(each);
                else
                    updateList.add(each);
            }

            try {
                update updateList;
                insert insertList;
            } catch (DMLException dmlex)  {
                XedeException.Raise('Error adding or updating object : {0}', dmlex.getMessage());
            }
        } while (dependentList.isEmpty() == false);
    }

    public virtual void saveTo(list<sobject> anSobjectList, list<XedeObject> aDependentList)
    {
        subclassMethodError();  // subclasses that participate in dependency saves must override this
    }
}

To understand how this code works you need to be familiar with subclassing.  Essentially, Invoice and InvoiceLine are both subclasses of XedeObject, which means they inherit all of XedeObject's functionality.  Though neither Invoice nor InvoiceLine implements save(), both understand the message because they inherit its implementation from XedeObject.

The best way to understand what save() does is to walk through "anInvoice.save()."

anInvoice.save() executes XedeObject's save() method because Invoice doesn't have one of its own (remember, it's a subclass of XedeObject).  save() begins by adding its own instance to dependentList.  Then it loops over the dependent list, sending saveTo() to each instance and collecting new dependent objects in the dependent list.

After collecting all the objects it either updates or inserts them, then returns to the top of the loop if the dependent list isn't empty and restarts the process.

When the dependent list is empty there's nothing else to do, and the method falls off the bottom, returning to the caller.

XedeObject also implements saveTo(), but its implementation throws an exception.  XedeObject's subclasses ought to implement saveTo() themselves if they intend to participate in the dependency saves.  If they don't or won't, there's no need to override saveTo().

One of our recent projects was a loan servicing system.  Each loan could have multiple obligors, and each obligor could have multiple addresses.  The system could be given multiple loans at a time to create, and with each batch of loans a log record was recorded.  We had an apiResponse object with a list of loans.  When we called anApiResponse.save(), its saveTo() sent saveTo() to each of its loans, each loan sent saveTo() to each of its obligors, and each obligor sent saveTo() to each of its addresses, before apiResponse sent saveTo() to its log class.

In the end, ApiResponse saved the loans, obligors, addresses, and log records with three DML statements--all without anything much more complicated than each class implementing saveTo().

Some programmers may argue that interfaces could have accomplished the same feat without subclassing, but in this case that's not true.  Interfaces don't provide method implementations.  Had we used interfaces, every object would have been required to implement save().

Still to do

As useful as save()/saveTo() has proved to be, I can think of a few improvements I'd like to make to it.

First, I'd like to add a delete list.  Some of our operations include deletes, and rather than having each object do its own deletes I'd prefer to collect them into a single list and delete them all at once.

Next, the exception handling around the update and insert should be improved.  DmlException has lots of valuable information we could log or include in our own exception.

Third, I would love to map the DML exceptions with the objects that added them to the list.  save() could then collect all the DML exceptions and send them to the objects responsible for adding them to the list.

Coming up

  • XedeObject implements other useful methods we use across many of our projects.  Implementing them once in XedeObject and deploying it to each of our customers' orgs saves time and money, and improves consistency across all our projects.  One of these is coalesce().  There are many others.
  • Curl scripts for exercising Salesforce REST services.
  • Using staticresources as a source for unit-test data.