Amazon S3

Amazon S3 — the Simple Storage Service — is a common place to dump data for long-term storage on AWS. Pipedream supports delivery to S3 as a first-class Destination.

Using $.send.s3 in workflows

You can send data to an S3 Destination in Node.js code steps using $.send.s3().

$.send.s3() takes the following parameters:

$.send.s3({
  bucket: "your-bucket-here", // the name of the S3 bucket you want to write to
  prefix: "your-prefix/", // the prefix (directory) within the bucket where objects are written
  payload: event.body, // the data you want to send to S3
});

As with any $.send function, you can call $.send.s3() conditionally, within a loop, or anywhere you'd normally call a function.
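
For example, here's a minimal sketch that delivers only matching records to S3. The records array and the important flag are hypothetical; substitute the shape of your own event data:

// Deliver each matching record as its own payload
for (const record of event.body.records) {
  if (record.important) {
    $.send.s3({
      bucket: "your-bucket-here",
      prefix: "important-records/",
      payload: record,
    });
  }
}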

Using $.send.s3 in component actions

If you're authoring a component action, you can deliver data to an S3 Destination using $.send.s3.

$.send.s3 works the same way in component actions as $.send.s3() does in workflow code steps:

async run({ $ }) {
  $.send.s3({
    bucket: "your-bucket-here",
    prefix: "your-prefix/",
    payload: { name: "Luke Skywalker" }, // any JSON-serializable data
  });
}
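
Note that the event object available in workflow code steps isn't defined inside a component action, so the payload above is a literal object.

For context, here's a minimal sketch of a complete action component that exposes the bucket and prefix as props. The component name, version, and prop labels are illustrative:

export default defineComponent({
  name: "Send to S3",
  version: "0.0.1",
  type: "action",
  props: {
    bucket: { type: "string", label: "S3 bucket name" },
    prefix: { type: "string", label: "S3 key prefix" },
  },
  async run({ $ }) {
    // Deliver the payload to the configured bucket and prefix
    $.send.s3({
      bucket: this.bucket,
      prefix: this.prefix,
      payload: { name: "Luke Skywalker" }, // any JSON-serializable data
    });
  },
});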

S3 Bucket Policy

In order for us to deliver objects to your S3 bucket, you need to modify its bucket policy to allow Pipedream to upload objects.

Replace [your bucket name], near the bottom of the policy, with the name of your bucket.

{
  "Version": "2012-10-17",
  "Id": "allow-pipedream-limited-access",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::203863770927:role/Pipedream"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::[your bucket name]",
        "arn:aws:s3:::[your bucket name]/*"
      ]
    }
  ]
}

This bucket policy grants the minimum set of permissions necessary for Pipedream to deliver objects to your bucket. We upload objects using the Multipart Upload API, which is why the multipart-related permissions are included.
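
If you manage your bucket from the command line, you can apply this policy with the AWS CLI. A sketch, assuming you've saved the policy above to a local file named pipedream-policy.json:

aws s3api put-bucket-policy --bucket your-bucket-here --policy file://pipedream-policy.json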

S3 Destination delivery

S3 Destination delivery is handled asynchronously, separate from the execution of a workflow. Moreover, events sent to an S3 bucket are batched and delivered once a minute. For example, if you sent 30 events to an S3 Destination within a particular minute, we would collect all 30 events, delimit them with newlines, and write them to a single S3 object.

In some cases, delivery will take longer than a minute.
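
Because payloads within a batch are newline-delimited, an object containing two events delivered in the same minute would look something like this (the payloads here are hypothetical):

{"name":"event 1","ts":"2019-05-25T16:14:01Z"}
{"name":"event 2","ts":"2019-05-25T16:14:45Z"}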

S3 object format

We upload objects using the following format:

[PREFIX]/YYYY/MM/DD/HH/YYYY-MM-DD-HH-MM-SS-IDENTIFIER.gz

That is, objects are written under your prefix, within folders for the current date and hour. The object name repeats that date information, along with a unique identifier, so it's easy to tell when an object was uploaded from its name alone.

For example, if I were writing data to a prefix of test/, I might see an object in S3 at this path:

test/2019/05/25/16/2019-05-25-16-14-58-8f25b54462bf6eeac3ee8bde512b6c59654c454356e808167a01c43ebe4ee919.gz

As noted above, a given object contains all payloads delivered to an S3 Destination within a specific minute. Multiple events within a given object are newline-delimited.
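
Since objects are gzipped and newline-delimited, you'll typically decompress and split them before processing. A minimal sketch in Node.js, assuming you've already downloaded an object to a local file (the filename is hypothetical) and that each payload was JSON:

import { readFileSync } from "fs";
import { gunzipSync } from "zlib";

// Read and decompress a downloaded S3 object
const compressed = readFileSync("2019-05-25-16-14-58-your-object.gz");
const contents = gunzipSync(compressed).toString("utf8");

// Each non-empty line is one payload delivered during that minute
const events = contents
  .trim()
  .split("\n")
  .map((line) => JSON.parse(line));

console.log(`Parsed ${events.length} events`);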

Limiting S3 Uploads by IP

S3 provides a mechanism to restrict operations to requests from specific IP addresses. If you'd like to apply that filter, note that uploads made via $.send.s3() will come from one of the following IP addresses:

3.208.254.105
3.212.246.173
3.223.179.131
3.227.157.189
3.232.105.55
3.234.187.126
18.235.13.182
34.225.84.31
52.2.233.8
52.23.40.208
52.202.86.9
52.207.145.190
54.86.100.50
54.88.18.81
54.161.28.250
107.22.76.172

This list may change over time. If you've previously whitelisted these IP addresses and are having trouble uploading S3 objects, please check to ensure this list matches your firewall rules.
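
To enforce this in your bucket policy, you could add an IP condition to the Allow statement above. A sketch showing only the first two addresses; in practice you'd list the full set:

"Condition": {
  "IpAddress": {
    "aws:SourceIp": [
      "3.208.254.105/32",
      "3.212.246.173/32"
    ]
  }
}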