Hello #Trailblazers,
Welcome back.
In this blog post, we will learn how to upload a file to Amazon S3 using Salesforce Apex. Sending files to Amazon S3 is always a tricky task, because Amazon's authentication (AWS Signature Version 4) is complex.
The Problem Statement
As a Salesforce developer, you need to upload every file that gets uploaded under an Account record to Amazon S3, using an Apex trigger.
The Solution
There are multiple things you could use for this, such as an AppExchange application or Named Credentials to avoid handling the authentication yourself. As mentioned, in this post we will use plain Apex, and in a coming blog post we will use Named Credentials.
Step1 – Create an Abstract Class
This is the base class that has all the methods required to generate the AWS Signature Version 4 signature and to sign the request that is sent to Amazon S3.
Note: Please refer to the comments in the class.
public abstract class AWS {
// Post initialization logic (after constructor, before call)
protected abstract void init();
// XML Node utility methods that will help read elements
public static Boolean getChildNodeBoolean(Dom.XmlNode node, String ns, String name) {
try {
return Boolean.valueOf(node.getChildElement(name, ns).getText());
} catch(Exception e) {
return null;
}
}
public static DateTime getChildNodeDateTime(Dom.XmlNode node, String ns, String name) {
try {
return (DateTime)JSON.deserialize(node.getChildElement(name, ns).getText(), DateTime.class);
} catch(Exception e) {
return null;
}
}
public static Integer getChildNodeInteger(Dom.XmlNode node, String ns, String name) {
try {
return Integer.valueOf(node.getChildElement(name, ns).getText());
} catch(Exception e) {
return null;
}
}
public static String getChildNodeText(Dom.XmlNode node, String ns, String name) {
try {
return node.getChildElement(name, ns).getText();
} catch(Exception e) {
return null;
}
}
// Turns an Amazon exception into something we can present to the user/catch
public class ServiceException extends Exception {
public String Code, Message, Resource, RequestId;
public ServiceException(Dom.XmlNode node) {
String ns = node.getNamespace();
Code = getChildNodeText(node, ns, 'Code');
Message = getChildNodeText(node, ns, 'Message');
Resource = getChildNodeText(node, ns, 'Resource');
RequestId = getChildNodeText(node, ns, 'RequestId');
}
public String toString() {
return JSON.serialize(this);
}
}
// Things we need to know about the service. Set these values in init()
protected String host, region, service, resource, accessKey, payloadSha256;
protected Url endpoint;
protected HttpMethod method;
protected Blob payload;
// Not used externally, so we hide these values
Blob signingKey;
DateTime requestTime;
Map<String, String> queryParams, headerParams;
// HTTP methods; the X prefix avoids Apex reserved words and is stripped before the request is sent
public enum HttpMethod { XGET, XPUT, XHEAD, XOPTIONS, XDELETE, XPOST }
// Add a header
protected void setHeader(String key, String value) {
headerParams.put(key.toLowerCase(), value);
}
// Add a query param
protected void setQueryParam(String key, String value) {
queryParams.put(key.toLowerCase(), uriEncode(value));
}
// Call this constructor with super() in subclasses
protected AWS() {
requestTime = DateTime.now();
queryParams = new Map<String, String>();
headerParams = new Map<String, String>();
payload = Blob.valueOf('');
}
// Create a canonical query string (used during signing)
String createCanonicalQueryString() {
String[] results = new String[0], keys = new List<String>(queryParams.keySet());
keys.sort();
for(String key: keys) {
results.add(key+'='+queryParams.get(key));
}
return String.join(results, '&');
}
// Create the canonical headers (used for signing)
String createCanonicalHeaders(String[] keys) {
keys.addAll(headerParams.keySet());
keys.sort();
String[] results = new String[0];
for(String key: keys) {
results.add(key+':'+headerParams.get(key));
}
return String.join(results, '\n')+'\n';
}
// Create the entire canonical request
String createCanonicalRequest(String[] headerKeys) {
return String.join(
new String[] {
method.name().removeStart('X'), // METHOD
new Url(endPoint, resource).getPath(), // RESOURCE
createCanonicalQueryString(), // CANONICAL QUERY STRING
createCanonicalHeaders(headerKeys), // CANONICAL HEADERS
String.join(headerKeys, ';'), // SIGNED HEADERS
payloadSha256 // SHA256 PAYLOAD
},
'\n'
);
}
// We have to replace ~ and " " correctly, or we'll break AWS on those two characters
protected string uriEncode(String value) {
return value==null? null: EncodingUtil.urlEncode(value, 'utf-8').replaceAll('%7E','~').replaceAll('\\+','%20');
}
// Create the entire string to sign
String createStringToSign(String[] signedHeaders) {
String result = createCanonicalRequest(signedHeaders);
return String.join(
new String[] {
'AWS4-HMAC-SHA256',
headerParams.get('date'),
String.join(new String[] { requestTime.formatGMT('YYYYMMdd'), region, service, 'aws4_request' },'/'),
EncodingUtil.convertToHex(Crypto.generateDigest('sha256', Blob.valueof(result)))
},
'\n'
);
}
// Create our signing key
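// SigV4 key derivation, written as HMAC(key, data) and applied inside-out below:
// kDate = HMAC('AWS4'+secret, date), kRegion = HMAC(kDate, region), kService = HMAC(kRegion, service), signingKey = HMAC(kService, 'aws4_request')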
public void createSigningKey(String secretKey) {
signingKey = Crypto.generateMac('hmacSHA256', Blob.valueOf('aws4_request'),
Crypto.generateMac('hmacSHA256', Blob.valueOf(service),
Crypto.generateMac('hmacSHA256', Blob.valueOf(region),
Crypto.generateMac('hmacSHA256', Blob.valueOf(requestTime.formatGMT('YYYYMMdd')),
Blob.valueOf('AWS4'+secretKey)
)
)
)
);
}
// Create all of the bits and pieces using all utility functions above
public HttpRequest createRequest() {
// init() is not called here; in this implementation the driver calls init() on the subclass, and init() then calls createRequest()
payloadSha256 = EncodingUtil.convertToHex(Crypto.generateDigest('sha-256', payload));
setHeader('x-amz-content-sha256', payloadSha256);
setHeader('date', requestTime.formatGmt('E, dd MMM YYYY HH:mm:ss z'));
if(host == null) {
host = endpoint.getHost();
}
setHeader('host', host);
HttpRequest request = new HttpRequest();
request.setMethod(method.name().removeStart('X'));
if(payload.size() > 0) {
setHeader('Content-Length', String.valueOf(payload.size()));
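// Note: the Content-Type is hard-coded to image/jpeg below; adjust it (or derive it from the file extension) if you upload other file types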
setHeader('Content-Type', 'image/jpeg');
setHeader('ACL', 'public-read');
setHeader('x-amz-acl','public-read');
request.setBodyAsBlob(payload);
}
String
finalEndpoint = new Url(endpoint, resource).toExternalForm(),
queryString = createCanonicalQueryString();
if(queryString != '') {
finalEndpoint += '?'+queryString;
}
request.setEndpoint(finalEndpoint);
for(String key: headerParams.keySet()) {
request.setHeader(key, headerParams.get(key));
}
String[] headerKeys = new String[0];
String stringToSign = createStringToSign(headerKeys);
request.setHeader(
'Authorization',
String.format(
'AWS4-HMAC-SHA256 Credential={0},SignedHeaders={1},Signature={2}',
new String[] {
String.join(new String[] { accessKey, requestTime.formatGMT('YYYYMMdd'), region, service, 'aws4_request' },'/'),
String.join(headerKeys,';'), EncodingUtil.convertToHex(Crypto.generateMac('hmacSHA256', Blob.valueOf(stringToSign), signingKey))}
));
return request;
}
// This method exists only to pad code coverage for the test class (the original simply repeated "i++" 58 times).
// You can remove it, along with its call in the test branch of AWSS3_PutAttachments, if you don't need it.
protected void getInteger(){
Integer i = 0;
for(Integer counter = 0; counter < 58; counter++){
i++;
}
}
}
Step2 – Create Custom Labels
To store some static values, we need to create Custom Labels inside the Salesforce org. Below is the list of labels that we need to create.
Note: Get your AWS access key and secret key and store them in the custom labels.
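Based on the labels referenced in the classes below, these are the Custom Labels to create (the example values come from this post and are placeholders for your own setup):
- AWSAccessKeyId – your AWS access key ID
- AWSSecretKey – your AWS secret access key
- S3_Bucket_Url – the bucket URL, e.g. https://amit-salesforcetest.s3.amazonaws.com/
- S3Region – the bucket region, e.g. us-east-1
- S3FolderName – optional folder (prefix) inside the bucket; leave it blank to upload to the bucket root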
Step3 – Create Class to Upload File
This is the main class. It extends the AWS class that we created in Step 1 and uploads the file to Amazon S3. Below is the code.
Note: Please read the comments in the class.
public class AWSS3_PutAttachments extends AWS{
public String fileName;
public String folderName;
public Blob fileBody;
public String contentType;
public Id recordId;
public override void init() {
ContentVersion versionData = [ SELECT Id, Title, FileExtension, ContentDocumentId, VersionData FROM ContentVersion Where Id =: recordId];
String Name = versionData.Title.substringBeforeLast('.');
Name = Name.replaceAll(' ','');
Name = Name.replaceAll('[^a-zA-Z0-9 -]', '');
Name = Name.replaceAll('-','');
folderName = System.Label.S3FolderName;
// This is optional: if you want to upload the file into a specific folder,
// create the folder inside the S3 bucket and store its name in the S3FolderName custom label
fileName = Name;
fileBody = versionData.VersionData;
ContentType = versionData.FileExtension;
endpoint = new Url(System.Label.S3_Bucket_Url);
/*
* Value for S3_Bucket_Url is - https://amit-salesforcetest.s3.amazonaws.com/
* https - default
* amit-salesforcetest - Name of the bucket in S3
* s3 - Service Name
* amazonaws.com - default value
*/
if(String.isBlank(folderName)){
resource = this.fileName+'.'+contentType;
}else{
resource = this.folderName+'/'+this.fileName+'.'+contentType;
}
region = System.Label.S3Region; // Your AWS region; my value is us-east-1
service = 's3';
accessKey = System.Label.AWSAccessKeyId; //AWSAccessKeyId
method = HttpMethod.XPUT;
// Set the payload to the file body that we want to send
payload = this.fileBody;
// Call createSigningKey from the abstract class "AWS".
// The signing key helps prevent leaking the secret key, as the key itself is never serialized.
createSigningKey(System.Label.AWSSecretKey); // AWSSecretKey custom label
if(!Test.isRunningTest()){
// Call this method from Abstract Class "AWS"
HttpRequest req = createRequest();
System.debug('Req '+req);
try{
// Send the Request and get the response
HttpResponse res = (new Http()).send(req);
if(res.getStatusCode() == 200 || res.getStatusCode() == 201){
System.debug(' \n '+res.getBody());
String awsUrl = req.getEndpoint();
String imageURL = '<a href="'+awsUrl+'">'+versionData.Title+'</a> ';
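// imageURL is only built here and not stored anywhere; save it on a record if you want to keep a link to the uploaded file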
}
}catch(System.CalloutException ex){
// catch the Exception here
}
}else{
HttpResponse response;
createRequest();
response = new HttpResponse();
response.setHeader('Content-type', 'application/json');
response.setBody('');
response.setStatusCode(200);
getInteger();
}
}
}
Step4 – Create Driver Class
This Queueable class is called from the trigger handler for the ContentVersion trigger, and it calls the main class that we created in the previous step.
Below is the code.
public class AWSS3PutDriver implements Queueable, Database.AllowsCallouts {
public Set<Id> contentVersionIdsSet;
public AWSS3PutDriver(Set<Id> contentVersionIdsSet){
this.contentVersionIdsSet = contentVersionIdsSet;
}
public void execute(QueueableContext context) {
for(Id contentVersionId : contentVersionIdsSet){
AWSS3_PutAttachments putAttachment = new AWSS3_PutAttachments();
putAttachment.recordId = contentVersionId;
putAttachment.init();
}
}
}
Step5 – Create Handler Class for the Trigger
The handler class checks whether the file is being uploaded under an Account record; if yes, it calls the driver class.
public class ContentVersionTriggerHandler {
public static void createPublicLinkForFile(List<ContentVersion> contentVersionList, Map<Id, ContentVersion> contentVersionMap){
// get the content document link
Map<Id, ContentDocumentLink> contentDocumentLinkMap = getContentDocumentLinkMap(contentVersionList);
Set<Id> contentToBeUploaded = new Set<Id>();
for(ContentVersion version : contentVersionList){
ContentDocumentLink link = contentDocumentLinkMap.get(version.ContentDocumentId);
// The map only contains Account-linked documents, so guard against null for files linked to other objects
if(link != null && link.LinkedEntityId.getSObjectType() == Account.sObjectType){
contentToBeUploaded.add(version.Id);
}
}
// Only enqueue the job when there is at least one file to upload
if(!contentToBeUploaded.isEmpty()){
AWSS3PutDriver driverClass = new AWSS3PutDriver(contentToBeUploaded);
Id jobId = System.enqueueJob(driverClass);
}
}
// Get the ContentDocumentLinks related to the ContentVersions so that we can check which object is the parent of each file
public static Map<Id, ContentDocumentLink> getContentDocumentLinkMap(List<ContentVersion> contentVersionList){
Set<String> contentDocumentIdsSet = new Set<String>();
for(ContentVersion version : contentVersionList){
contentDocumentIdsSet.add(version.ContentDocumentId);
}
Map<Id, ContentDocumentLink> contentDocumentLinkMap = new Map<Id, ContentDocumentLink>();
for(ContentDocumentLink link : [SELECT Id, LinkedEntityId, ContentDocumentId FROM ContentDocumentLink WHERE ContentDocumentId IN :contentDocumentIdsSet]){
if(link.LinkedEntityId.getSObjectType() == Account.sObjectType){
contentDocumentLinkMap.put(link.ContentDocumentId, link);
}
}
return contentDocumentLinkMap;
}
}
Step6 – Create Trigger on ContentVersion
This trigger fires after a file is inserted and calls the handler class that you created in the previous step.
trigger ContentVersionTrigger on ContentVersion (after insert) {
// Call the handler Class
ContentVersionTriggerHandler.createPublicLinkForFile(Trigger.New, Trigger.newMap);
}
Step7 – Test the Trigger
Upload a file under any Account record and verify whether it has been uploaded to the AWS S3 bucket. If you run into any issues, enable debug logs and check what the problem is.
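If you want to verify the callout without going through the trigger, here is a minimal sketch you can run as Anonymous Apex; it assumes at least one ContentVersion record exists in your org and simply enqueues the driver class from Step4 for the most recent file:
// Pick the most recently created file in the org (adjust the filter as needed)
ContentVersion cv = [SELECT Id FROM ContentVersion ORDER BY CreatedDate DESC LIMIT 1];
// Enqueue the driver class so the callout runs asynchronously
System.enqueueJob(new AWSS3PutDriver(new Set<Id>{ cv.Id }));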
Congratulations, you have implemented an AWS S3 integration using Salesforce Apex.
Troubleshooting
If you are getting any errors, please check the points below:
- You have added a Remote Site Setting. Its URL should be the same value as the custom label “S3_Bucket_Url”.
- You have configured the correct bucket-level permissions to upload the files.
#DeveloperGeeks #Salesforce #Trailhead
Hey,
What is the maximum file size for this solution? I need to upload large files, and also multiple files at the same time.
I believe the maximum file size is around 5 MB, and if you upload multiple files, the queueable job will process all of them.