I need some help uploading large files to an S3 bucket from Salesforce Apex (server side).
I need to split a Blob and upload it to an AWS S3 bucket using the HTTP PUT operation. I can do this for files up to 12 MB in a single upload, because that is the PUT request body size limit in Apex, so I need to upload using the multipart operation. I noticed S3 allows uploading in parts and returns an uploadId. I am wondering if anyone has already done this before in Salesforce Apex code; it would be greatly appreciated.
Thanks in advance, Parbati Bose.
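For reference, S3 multipart uploads require every part except the last to be at least 5 MB. A minimal Java sketch of the part-splitting arithmetic involved (a hypothetical helper for illustration, not Salesforce code):

```java
import java.util.ArrayList;
import java.util.List;

public class PartSplitter {
    // S3 requires every part except the last to be >= 5 MB.
    static final int PART_SIZE = 5 * 1024 * 1024;

    // Split a payload into part-sized slices; only the last part may be smaller.
    static List<byte[]> split(byte[] data) {
        List<byte[]> parts = new ArrayList<>();
        for (int off = 0; off < data.length; off += PART_SIZE) {
            int len = Math.min(PART_SIZE, data.length - off);
            byte[] part = new byte[len];
            System.arraycopy(data, off, part, 0, len);
            parts.add(part);
        }
        return parts;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[12 * 1024 * 1024]; // a 12 MB file
        List<byte[]> parts = split(payload);
        System.out.println(parts.size()); // 5 MB + 5 MB + 2 MB -> 3 parts
    }
}
```

Each part would then be sent as a separate UploadPart request against the uploadId, followed by a CompleteMultipartUpload call listing the ETags of the parts.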
Here is the code:
public with sharing class AWSS3Service {

    private static Http http;

    @AuraEnabled
    public static void uploadToAWSS3(String fileToUpload, String filenm, String doctype) {
        fileToUpload = EncodingUtil.urlDecode(fileToUpload, 'UTF-8');
        // Encode the file name in case it contains special characters.
        filenm = EncodingUtil.urlEncode(filenm, 'UTF-8');
        String filename = 'Storage/' + filenm;
        String formattedDateString = DateTime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');

        // S3 bucket credentials and location
        String key = '**********';
        String secret = '********';
        String bucketname = 'testbucket';
        String region = 's3-us-west-2';
        String host = region + '.amazonaws.com'; // AWS server base URL

        try {
            HttpRequest req = new HttpRequest();
            http = new Http();
            req.setMethod('PUT');
            req.setEndpoint('https://' + bucketname + '.' + host + '/' + filename);
            req.setHeader('Host', bucketname + '.' + host);
            req.setHeader('Content-Encoding', 'UTF-8');
            req.setHeader('Content-Type', doctype);
            req.setHeader('Connection', 'keep-alive');
            req.setHeader('Date', formattedDateString);
            // The ACL must be sent as the x-amz-acl header and included in the
            // string to sign, otherwise S3 ignores it.
            req.setHeader('x-amz-acl', 'public-read-write');

            String stringToSign = 'PUT\n\n' +
                doctype + '\n' +
                formattedDateString + '\n' +
                'x-amz-acl:public-read-write\n' +
                '/' + bucketname + '/' + filename;

            Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret));
            String signed = EncodingUtil.base64Encode(mac);
            String authHeader = 'AWS ' + key + ':' + signed;
            req.setHeader('Authorization', authHeader);
            req.setBodyAsBlob(EncodingUtil.base64Decode(fileToUpload));

            HttpResponse response = http.send(req);
            System.debug('response from AWS S3 is ' + response.getStatusCode() + ' and ' + response.getBody());
        } catch (Exception e) {
            System.debug('error in connecting to S3: ' + e.getMessage());
            throw e;
        }
    }
}
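The string-to-sign and HMAC-SHA1 construction used above is AWS Signature Version 2 (a legacy scheme; newer regions require Signature Version 4). A self-contained Java sketch of the same construction, using only the standard library (the key, date, and object names below are placeholder values, not real credentials):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class S3V2Signer {
    // Build the Signature V2 string-to-sign for a PUT, mirroring the Apex code:
    // verb, empty Content-MD5, content type, date, then the canonicalized resource.
    static String stringToSign(String contentType, String date,
                               String bucket, String objectKey) {
        return "PUT\n\n" + contentType + "\n" + date + "\n"
                + "/" + bucket + "/" + objectKey;
    }

    // HMAC-SHA1 over the string-to-sign with the secret key, base64-encoded.
    static String sign(String toSign, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] raw = mac.doFinal(toSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw);
    }

    static String authorizationHeader(String accessKey, String signature) {
        return "AWS " + accessKey + ":" + signature;
    }

    public static void main(String[] args) throws Exception {
        String sts = stringToSign("application/pdf",
                "Tue, 27 Mar 2007 19:36:42 +0000", "testbucket", "Storage/file.pdf");
        String sig = sign(sts, "placeholder-secret");
        System.out.println(authorizationHeader("AKIDPLACEHOLDER", sig));
    }
}
```

If any x-amz-* headers are sent (such as x-amz-acl), they must also appear in the string-to-sign, in lowercase and sorted order, between the date and the resource.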
The AWS SDK for Java exposes a high-level API, called TransferManager, that simplifies multipart uploads (see Uploading Objects Using Multipart Upload API). You can upload data from a file or a stream. You can also set advanced options, such as the part size you want to use for the multipart upload, or the number of concurrent threads you want to use when uploading the parts. You can also set optional object properties, the storage class, or the ACL. You use the PutObjectRequest and the TransferManagerConfiguration classes to set these advanced options.
Here is the sample code from https://docs.aws.amazon.com/AmazonS3/latest/dev/HLuploadFileJava.html, which you can adapt to your Salesforce Apex code:
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class HighLevelMultipartUpload {

    public static void main(String[] args) throws Exception {
        Regions clientRegion = Regions.DEFAULT_REGION;
        String bucketName = "*** Bucket name ***";
        String keyName = "*** Object key ***";
        String filePath = "*** Path for file to upload ***";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();

            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");

            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
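The advanced options mentioned above (part size and multipart threshold) are set on the builder. A hedged sketch, assuming AWS SDK for Java v1 is on the classpath (the specific sizes are illustrative, not recommendations):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class TunedTransferManager {
    public static TransferManager build() {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        // Part size and the threshold above which multipart upload kicks in
        // are both configured on the TransferManager builder.
        return TransferManagerBuilder.standard()
                .withS3Client(s3Client)
                .withMinimumUploadPartSize(10L * 1024 * 1024)    // 10 MB parts
                .withMultipartUploadThreshold(20L * 1024 * 1024) // multipart above 20 MB
                .build();
    }
}
```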