
Posted on Sep 13, 2011 in Code, Mobile, The Cloud, Videos

Application Development with Android

http://www.youtube.com/watch?v=ZTNRO24-s7g

This is a link to my Dreamforce 2011 session on Application Development with Android. Mobile application development sure is a hot topic these days, and the Android platform is gaining fast, especially with new tablet formats. Every mobile app needs a robust, secure, and capable database. So this coding session will walk you through creating an application on the Android platform by leveraging the power of Database.com. We’ll give you all the details you need to begin participating in the hottest segment of cloud development.


Posted on Aug 21, 2011 in Mobile, The Cloud

OAuth 2.0 for Salesforce.com

At this point, we’ve implemented the OAuth 2.0 User-Agent flow and the Refresh Token flow for iOS, Android, and Flex/AS3. I figure that makes us as much an expert at doing this as anybody, so I thought I’d take a moment to describe some of the details. First off, the reason you want to use OAuth 2.0 when developing apps for mobile devices… no token. We’ve been developing mobile apps for Salesforce.com for the last 4 or so years, and the need to provide a username, password, and security token has always been a pain point. Since it’s a 24-character alpha-numeric string, this was especially problematic back before iPhones had copy/paste functionality (“is that an l, an I, or a 1?”). With OAuth 2.0, you can finally get rid of having to worry about the token.

OAuth 2.0 is a popular open specification for authorizing access to web services. If you’ve used a mobile app that logs into Facebook, Twitter, LinkedIn, or Chatter, you’ve probably used it. OAuth 2.0 for Salesforce.com provides four different authentication flows:

  • Web Server
  • User-Agent
  • Refresh Token
  • Username/Password

A combination of the User-Agent flow and the Refresh Token flow is recommended for mobile applications, so that’s what I’ll demonstrate here.

First off, you should understand both flows at a high level.

Salesforce Configuration

Both the User-Agent flow and the Refresh Token flow require a Remote Access Application to be set up in the target SFDC org. This is configured under Setup => Develop => Remote Access. Required fields are Application, Contact Email, and Callback URL. There are a variety of rules about what the Callback URL can be, but the simplest option is to have it be: https://login.salesforce.com/services/oauth2/success

Once saved, SFDC will generate and display a Consumer Key and a Consumer Secret. Both of these will be needed by the application for login.

User-Agent Flow

The User-Agent flow involves the use of a webview within the application. The app passes a special Salesforce.com URL to that webview, which renders a login view.

First, the user will be asked to log in, and then they will be asked to confirm that they would like to grant this application access to their Salesforce.com data.

The URL passed to SFDC in order to render this login view has this format:

https://login.salesforce.com/services/oauth2/authorize?
  response_type=token&
  display=touch&
  client_id=[CONSUMER KEY FROM REMOTE ACCESS]&
  redirect_uri=https%3A%2F%2Flogin.salesforce.com%2Fservices%2Foauth2%2Fsuccess
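
As a concrete illustration, here’s a minimal Python sketch that assembles this authorization URL (the consumer key below is a placeholder, not a real value):

```python
from urllib.parse import urlencode

# Placeholder value; use the Consumer Key generated by your
# Remote Access Application in SFDC Setup.
CONSUMER_KEY = "[CONSUMER KEY FROM REMOTE ACCESS]"
CALLBACK_URL = "https://login.salesforce.com/services/oauth2/success"

def build_authorize_url(consumer_key, callback_url):
    """Build the login URL for the OAuth 2.0 User-Agent flow."""
    params = {
        "response_type": "token",  # User-Agent flow: token comes back in the fragment
        "display": "touch",        # mobile-optimized login page
        "client_id": consumer_key,
        "redirect_uri": callback_url,  # urlencode percent-encodes this for us
    }
    return "https://login.salesforce.com/services/oauth2/authorize?" + urlencode(params)

url = build_authorize_url(CONSUMER_KEY, CALLBACK_URL)
```

Pass the resulting URL to the app’s webview to render the Salesforce login page.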

Upon successful login, SFDC will redirect the webview to the URL specified as the redirect_uri (which must be the same as the Callback URL specified in the Remote Access Application setup). After the Callback URL comes a hash mark (#) and then a series of parameters returned by Salesforce:

access_token=[ACCESS TOKEN (Session ID)]
&refresh_token=[REFRESH TOKEN]
&instance_url=https%3A%2F%2Fna1.salesforce.com
&id=https%3A%2F%2Flogin.salesforce.com%2Fid%2F[ORG ID]%2F[USER ID]
&issued_at=1312403866216
&signature=[SIGNATURE]

The Access Token specified here is the Session ID that will be used for all subsequent calls to the API. The Refresh Token must be saved securely to disk, as it will be used in conjunction with the Consumer Key and Consumer Secret to get a new Access Token from SFDC when the current one expires. The Org ID, User ID, Issued At time (number of milliseconds since the Unix Epoch), and Signature should be saved as well.
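
To make the parsing concrete, here’s a small Python sketch that pulls these parameters out of the redirect URL’s fragment (all token values are made-up placeholders):

```python
from urllib.parse import urlparse, parse_qs

# Example redirect captured from the webview; all token values are fake.
redirect = (
    "https://login.salesforce.com/services/oauth2/success"
    "#access_token=00Dx0000000BV7z%21AR8AQBM8JExample"
    "&refresh_token=5Aep8614iLM.DqExampleToken"
    "&instance_url=https%3A%2F%2Fna1.salesforce.com"
    "&issued_at=1312403866216"
)

# The parameters come back in the URL fragment (after the '#'),
# not the query string, so parse the fragment component.
fragment = urlparse(redirect).fragment
params = {k: v[0] for k, v in parse_qs(fragment).items()}

session_id = params["access_token"]      # use as the Session ID on API calls
refresh_token = params["refresh_token"]  # persist securely for the Refresh Token flow
instance_url = params["instance_url"]    # base URL for subsequent API requests
```

Note that parse_qs also takes care of percent-decoding, so instance_url comes back as a plain https:// URL.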

Refresh Token Flow

At some point in time, your Access Token will expire. This may come as a shock, so be sure to prepare your friends and family. The app will learn that the session ID has expired when it attempts to access the API and the response is either:

  • SOAP API: HTTP 500 Internal Server Error, with a faultCode: <faultcode>sf:INVALID_SESSION_ID</faultcode>
  • REST API: HTTP 401 Unauthorized

The amount of time a session ID remains valid is configured under Security Controls => Session Settings in SFDC Setup. When it expires, the app will have to use the Refresh Token flow to request another Access Token from SFDC. To do this, the application sends a POST request to SFDC including the Refresh Token, the Consumer Key, and the Consumer Secret. SFDC responds with a new Access Token.
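
A minimal Python sketch of that POST (the credential values are placeholders; the actual request is commented out since it requires real credentials):

```python
from urllib.parse import urlencode
import urllib.request

TOKEN_URL = "https://login.salesforce.com/services/oauth2/token"

# Placeholder credentials; substitute the values from your
# Remote Access Application and the saved Refresh Token.
body = urlencode({
    "grant_type": "refresh_token",
    "client_id": "[CONSUMER KEY]",
    "client_secret": "[CONSUMER SECRET]",
    "refresh_token": "[SAVED REFRESH TOKEN]",
})

# Sending it would look like this (SFDC responds with JSON containing
# a fresh access_token and the instance_url to use with it):
# request = urllib.request.Request(TOKEN_URL, data=body.encode("utf-8"))
# response = urllib.request.urlopen(request)
```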

NOTE: At no time does the application store the Username or Password of the individual logging into the app.

So, that’s it. I hope you’ve enjoyed this foray into the world of OAuth 2.0 and Salesforce.com.

Posted on Jul 18, 2011 in The Cloud

Setting Up and Using DiffDog for Salesforce.com Deployment Validation

There are a few different ways to deploy metadata from org to org with Salesforce.com. The three main options are to use Eclipse, to use Ant (the “Force.com Migration Tool”), or to use Change Sets. The first two are completely manual to set up (although an Ant build, once configured, can obviously be run over and over again). Change Sets have a lot of promise, because they do handy things like searching for dependencies, but as of this writing, they are still prone to missing important bits, especially with profiles, so you can’t rely on them to produce a perfect deploy from one org to another. Consequently, it’s important to be able to quickly validate that a deployment was successful, and that everything you meant to deploy from one org to another actually did get deployed.

Enter DiffDog…

DiffDog is a great tool for validating that the metadata between two orgs is identical, and, when used in conjunction with Eclipse, it can be used to push changes from one org to another. It can be used to compare any of the metadata types that can be checked out using Eclipse: Objects, Page Layouts, Profiles, Workflow, Reports, etc. The main benefit of this tool over other diff tools is that it can compare XML files while ignoring the order of XML nodes. This is important because the metadata from two orgs is XML-based and can be functionally identical but rendered in different orders. Because of this, a regular flat-file diff tool will give you lots of false positives. DiffDog can be configured to properly compare XML files, thus eliminating these false positives. This post describes some optimal settings for use with SFDC, and the process for comparing orgs and deploying changes.

To start with, download the tool from Altova: 

http://www.altova.com/download/diffdog/diff_merge_tool_professional.html

Setup

Once you’ve downloaded the app and registered it, go to the Tools menu, choose “Comparison Options”, and select the XML tab. This part is important: you want to make sure the comparison ignores the order of child nodes, meaning XML nodes can appear in any order and still be considered identical. Check the “Ignore order of child nodes” box in the Order section. All of the other options should be left at their defaults.
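
To see why ignoring child order matters, here’s a rough Python sketch of the idea (an illustration of the concept only, not DiffDog’s actual algorithm):

```python
import xml.etree.ElementTree as ET

def canonical(elem):
    """Reduce an element to a form where child order doesn't matter,
    mimicking DiffDog's "Ignore order of child nodes" option."""
    return (
        elem.tag,
        tuple(sorted(elem.attrib.items())),
        (elem.text or "").strip(),
        tuple(sorted(canonical(child) for child in elem)),
    )

# Functionally identical metadata, rendered in different orders:
a = ET.fromstring("<fields><field>Email__c</field><field>Name__c</field></fields>")
b = ET.fromstring("<fields><field>Name__c</field><field>Email__c</field></fields>")
```

Here canonical(a) == canonical(b), even though a naive flat-file comparison of the two documents would flag them as different.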

Additionally, if “quick” diff is turned on for folder comparison (it will be by default), make sure to turn it off. You want to do extension-based comparison (EXT).
Check Out the Orgs

You will now have to check out the metadata objects that you want to compare from both SFDC orgs using Eclipse. Let’s assume one org is a sandbox and one is production. Note that if you want to compare profiles, you will need to select all of the metadata types for everything that you want to compare profile permissions against. For instance, if you want to compare Field-Level Security on Custom Objects, you will need to check out Profile metadata AND Custom Object metadata. SFDC only sends profile metadata for the metadata types that you have checked out; if you check out only profiles, the files will be practically empty.
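
For reference, Eclipse drives this checkout with a package.xml manifest under the hood. A sketch that pulls both Custom Object and Profile metadata might look like this (the API version shown is an assumption — use whatever version your org supports):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>*</members>
        <name>Profile</name>
    </types>
    <version>22.0</version>
</Package>
```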

The metadata from the two orgs that you checked out will be located in your Workspace directory. You can generally figure out where this is by right-clicking on one of the files in Eclipse, and selecting “Properties”.

Using DiffDog

You’ll then want to pick a metadata type and open the two orgs’ folders for that type in DiffDog. For instance, to compare Objects, select “Compare Directories” from the File menu…

…and then select a metadata folder for each org. I’d suggest putting your sandbox org on the left and your production org on the right so that you’re moving changes from left to right, but you can do it either way.



Once you’ve done this, DiffDog will initiate a high-level diff of all files in the directories and display a summary. Lines displayed in black are identical, lines displayed in red have differences, and lines in blue are missing in one org or the other.

If you need to move an entire object over, you can do that here by clicking on the blue name and pressing the “Copy from left to right” or “Copy from right to left” button, depending on which direction you want to go.

If you want to inspect the differences between two files, double-click on one of them. This launches a flat, file-based diff, which is not what you want. Select the “Grid View” tab at the bottom left of the window instead; this launches the grid-based diff tool that will show you the differences between the two metadata objects.

Differences are highlighted in light green, and the “current difference” is highlighted in a darker green. To move a change from one org to another, you’ll have to click on the box in the grid in the org you want to move FROM, and then click the “Make current difference” button in the top toolbar (or hit Alt-Enter) to highlight it in dark green.

Once you’ve done this, you can copy the change over with the “Copy from Left to Right” button.

Deploying Your Changes

You’ll then want to save (Ctrl-S or File => Save). This will save your changes locally; note that they have not yet been deployed to SFDC. To do that, you’ll have to go back to Eclipse. Find the file (or group of files) that you saved in your Eclipse project, right-click on it, and select “Refresh”. This will cause Eclipse to attempt to deploy your changes to SFDC, which could result in one or more errors, so be sure to watch the Problems tab. If you’re deploying to Production, this step can take some time if the org has a lot of Apex code, because all tests will be re-run when you deploy. A minute or so is common; 10-15 minutes isn’t unheard of.



Posted on Apr 26, 2011 in The Cloud

The day the cloud stood still. Lessons learned roundup…

The well-publicized outage of EBS across multiple availability zones in the US-EAST-1 region of AWS last week kicked off some excellent blog posts from companies who, through robust architectural choices, managed to weather the storm quite well. The outage lasted five days, it’s been called the worst cloud computing disaster ever, and Amazon’s communications strategy didn’t exactly shine, but it has presented an opportunity to learn from the companies that are running on the AWS cloud better than many of their peers.

This is just a round-up of some of these posts, and the advice given. They’ve been edited down, of course, so be sure to read each of these articles for the whole story:

The Cloud is Not a Silver Bullet — Joe Stump, CTO of SimpleGeo

  • Everything needs to be automated. Spinning up new instances, expanding your clusters, backups, restoring from backups, metrics, monitoring, configurations, deployments, etc. should all be automated.
  • You must build share-nothing services that span AZs at a minimum. Preferably your services should span regions as well, which is technically more difficult to implement, but will increase your availability by an order of magnitude.
  • Avoid relying on ACID services. It’s not that you can’t run MySQL, PostgreSQL, etc. on the cloud, but the ephemeral and distributed nature of the cloud makes this a much more difficult feature to sustain.
  • Data must be replicated across multiple types of storage. If you run MySQL on top of RDS, you should be replicating to slaves on EBS, RDS multi-AZ slaves, ephemeral drives, etc. Additionally, snapshots and backups should span regions. This allows entire components to disappear and you to either continue to operate or restore quickly even if a major AWS service is down.
  • Application-level replication strategies. To truly go multi-region, or to span across cloud services, you’ll very likely have to build replication strategies into your application rather than relying on those inherent in your storage systems.

How SmugMug survived the Amazonpocalypse — Don MacAskill, CEO of SmugMug

  • Spread across as many AZs as you can. Use all four.
  • If your stuff is truly mission critical (banking, government, health, serious money maker, etc), spread across as many Regions as you can.
  • Beyond mission critical? Spread across many providers.
  • Since spreading across multiple Regions and providers adds crazy amounts of extra complexity, and complex systems tend to be less stable, you could be shooting yourself in the foot unless you really know what you’re doing.
  • Build for failure. Each component (EC2 instance, etc) should be able to die without affecting the whole system as much as possible.
  • Understand your components and how they fail. Use any component, such as EBS, only if you fully understand it. For mission-critical data using EBS, that means RAID1/5/6/10/etc locally, and some sort of replication or mirroring across AZs, with some sort of mechanism to get eventually consistent and/or re-instantiate after failure events.
  • Try to componentize your system. Why take the entire thing offline if only a small portion is affected?
  • Test your components. I regularly kill off stuff on EC2 just to see what’ll happen.

AWS outage timeline & downtimes by recovery strategy — Eric Kidd, Randomhacks.net

Eric took an interesting look at various potential strategies, and how long a company would have been offline during the EBS outage:

  • Rely on a single EBS volume with no snapshots: 3.5 days
  • Deploy into a single availability zone, with EBS snapshots: over 12 hours
  • Rely on multi-AZ RDS databases to fail over to another availability zone: longer than 14 hours for some users.
  • Run in 3 AZs, at no more than 60% capacity in each: This is the approach taken by Netflix, which sailed through this outage with no known downtime.
  • Replicate data to another AWS region or cloud provider: This is still the gold standard for sites which require high uptime guarantees.

The AWS Outage: The Cloud’s Shining Moment — George Reese, Founder of Valtira and enStratus

The Amazon model is the “design for failure” model. Under the “design for failure” model, combinations of your software and management tools take responsibility for application availability. The actual infrastructure availability is entirely irrelevant to your application availability. 100% uptime should be achievable even when your cloud provider has a massive, data-center-wide outage…

There are several requirements for “design for failure”:

  • Each application component must be deployed across redundant cloud components, ideally with minimal or no common points of failure
  • Each application component must make no assumptions about the underlying infrastructure—it must be able to adapt to changes in the infrastructure without downtime
  • Each application component should be partition tolerant—in other words, it should be able to survive network latency (or loss of communication) among the nodes that support that component
  • Automation tools must be in place to orchestrate application responses to failures or other changes in the infrastructure (full disclosure, I am CTO of a company that sells such automation tools, enStratus)

Today’s EC2 / EBS Outage: Lessons learned — Stephen Nelson-Smith, Technical Director of Atalanta Systems

  • Expect downtime…What matters is how you respond to downtime
  • Use amazon’s built-in availability mechanisms
  • Think about your use of EBS:
    • EBS is not a SAN
    • EBS is multi-tenant…Consider using lots of volumes and building up your own RAID 10 or RAID 6 from EBS volumes.
    • Don’t use EBS snapshots as a backup…Although they are available to different availability zones in a given region, you can’t move them between regions.
    • Consider not using EBS at all
  • Consider building towards a vendor-neutral architecture…Cloud abstraction tools like Fog, and configuration management frameworks such as Chef make the task easier.
  • Have a DR plan, and practice it
  • Infrastructure as code is hugely relevant…one of the great enablers of the infrastructure as code paradigm is the ability to rebuild the business from nothing more than a source code repository, some new compute resource (virtual or physical) and an application data backup.

Posted on Apr 21, 2011 in The Cloud

It’s Not Broken. You’re Just Doing It Wrong.

Okay, so the title is a bit harsh.

I was intrigued by the rather excellent post over at the blog Il y a du thé renversé au bord de la table, [Rant] Web development is just broken. Yoric makes the argument that web developers are forced to deal with too many “nightmares” that have very little to do with programming. First you have to decide on a programming language. Should you use PHP, C#, Java, Ruby, Perl, or Python? Then you have to choose a web server and OS. Windows/IIS or *nix and Apache? OSX? BSD? Solaris? If you go with Linux, which distro do you choose? Is it worth it to pay for Red Hat, or will Fedora do? What about Ubuntu? Then you have to choose a DBMS, of course. Do you want Oracle? Well, can you afford Oracle? Then there’s MySQL, SQL Server, or PostgreSQL. Or maybe one of the NoSQL databases like MongoDB, CouchDB, or Cassandra. And then you probably want to choose a server-side framework. Rails? Spring? Zend? And a client-side framework, of course, so you don’t have to worry too much about all the differences between the JS engines in each different browser. jQuery? Prototype? Scriptaculous?

And then, once everything is selected, it all has to be configured to work together without (too many) security holes. But, of course, how much does the average developer really know about configuring a secure Linux environment with Apache? Or setting up a secure IIS? And even if the developer does know a lot about configuring all of this, wouldn’t it be more productive to have him or her focused on developing actual application features rather than mucking around in apache2.conf or php.ini, or trying to figure out why their package manager can’t find the right package for some random server component? How do I configure CPAN, again? Do I really need the Multiverse, or will the Universe do? Then, of course, you’ll probably want an ORM, and you’ll need to decide on how you want to glue all the bits and pieces together.

Not to mention keeping all of that up to date and working as new releases get rolled out… oh, and what about scaling up to meet the increased demand if you start to get really popular and get bought by Conde Nast?

Great points. Couldn’t agree more. Anybody guess where I’m going with this?

Tired of worrying about infrastructure? You want to start coding now? Great, take a look at Elastic Beanstalk, Heroku, or VMForce (yeah, I know, “coming soon”). No infrastructure setup required. You still have to choose a language and a platform, I guess, but that seems unavoidable. You have to make some choices in life. However, you don’t have to care about which OS or web server to use, and you don’t have to manage updates of server software. AWS might all be running in VMware within a virtualized Windows 98 stack based on a billion hand-built Commodore 64s for all I care. As long as it works. And the DBMS is a service too… you don’t have to set it up, you just pick whichever one you want. When VMForce launches, you’ll have Database.com as a DBMS. With Elastic Beanstalk, you have RDS or SimpleDB. With Heroku, you have PostgreSQL out of the box, with a ton of other choices available, but you don’t set them up yourself; you just add them to your account, and they get set up for you.

What about security? Does your data center have 24-hour manned security, including foot patrols and perimeter inspections? Well, Salesforce does. Is your server certified by PCI, ISO, SAS70, and HIPAA? Well, AWS is, and Heroku is hosted on AWS, and they have their own operations team that monitors the system 24/7. Even Multi-Factor Authentication is just another service at AWS. And if somebody finds a security flaw in any of these platforms, it’s not your problem. Somebody else can figure it out and fix it, hopefully before you even know about it. Of course, it’s still important to write secure code, sanitize user inputs, parameterize SQL queries, etc., but at least that’s all in _your_ code. You can focus on writing good code, and not on whether or not you accidentally configured an Apache mod incorrectly, or accidentally allowed anonymous FTP access to your web server, or if your version of PHP has a buffer overrun bug that will allow some random hacker to drop your User table.

You’ll probably still need to glue some things together, and if you’re doing web development, you’ll still want a client-side framework so you don’t have to worry too much about all the various inconsistencies between browsers, but with the infrastructure headaches out of the picture, it’s easier to just start coding.
