This way, both DynamoDB and Kinesis streams can be used via the single
"stream" event rather than two different event types ("dynamodb" and "kinesis").
The stack is now set up in one place.
The S3 deployment bucket isn't created during stack creation if a bucket is already specified.
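A minimal sketch of that behavior, assuming a compiled CloudFormation template object (the function, variable, and logical ID names below are illustrative, not necessarily what the framework uses):

```js
// Hypothetical sketch: only add an S3 bucket resource to the compiled
// CloudFormation template when the user has not specified a deploymentBucket.
function addDeploymentBucket(service, compiledTemplate) {
  if (!service.provider.deploymentBucket) {
    compiledTemplate.Resources.ServerlessDeploymentBucket = {
      Type: 'AWS::S3::Bucket',
    };
  }
  return compiledTemplate;
}

// Example: a bucket is already specified, so Resources stays unchanged.
addDeploymentBucket(
  { provider: { deploymentBucket: 'my-existing-bucket' } },
  { Resources: {} }
);
```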
Working on configurability for the developer.
The stack deployment can still fail if iamRoleArn is set and a deploymentBucket is specified (scoped to AWS).
Previously there were a number of options for loading credentials, including legacy ones. Given the 0.x=>1.x change, we can drop many of the old approaches; this PR attempts to carry over the good ones.
The options for loading credentials are as follows:
1. define credentials in serverless.yml=>service.provider.credentials = { accessKeyId: 'accessKeyId', secretAccessKey: 'secretAccessKey', sessionToken: 'sessionToken' }
2. define a profile from which to get credentials in serverless.yml=>service.provider.profile = 'profile-name' (all profiles loaded using AWS.SharedIniFileCredentials, see http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SharedIniFileCredentials.html)
3. define credentials for all stages using the standard AWS environment variables (see http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EnvironmentCredentials.html)
4. define a profile for all stages using the environment variable AWS_PROFILE
5. define credentials for each stage using the standard AWS environment variables with the STAGE name inserted (e.g. stage='test', envVarName='AWS_TEST_*')
6. define a profile for each stage using an environment variable `AWS_${stageName.toUpperCase()}_PROFILE`
If credentials or profiles are declared in more than one way, the later cases in this list override the earlier ones (see the sketch below).
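A minimal sketch of that resolution order, assuming the AWS SDK for JavaScript (the function and variable names below are illustrative, not the plugin's actual API):

```js
'use strict';

const AWS = require('aws-sdk');

// Copy only the credential fields, so later sources only override the
// fields they actually provide.
function pick(creds) {
  const result = {};
  if (creds.accessKeyId) result.accessKeyId = creds.accessKeyId;
  if (creds.secretAccessKey) result.secretAccessKey = creds.secretAccessKey;
  if (creds.sessionToken) result.sessionToken = creds.sessionToken;
  return result;
}

// Hypothetical resolver following the six cases above, in order.
function resolveCredentials(provider, stage) {
  const stagePrefix = `AWS_${stage.toUpperCase()}`;
  const result = {};

  // 1. credentials defined directly in serverless.yml (provider.credentials)
  if (provider.credentials) {
    Object.assign(result, pick(provider.credentials));
  }

  // 2. profile defined in serverless.yml (provider.profile)
  if (provider.profile) {
    Object.assign(result, pick(new AWS.SharedIniFileCredentials({ profile: provider.profile })));
  }

  // 3. standard AWS environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, ...)
  Object.assign(result, pick(new AWS.EnvironmentCredentials('AWS')));

  // 4. profile from the AWS_PROFILE environment variable
  if (process.env.AWS_PROFILE) {
    Object.assign(result, pick(new AWS.SharedIniFileCredentials({ profile: process.env.AWS_PROFILE })));
  }

  // 5. stage-specific environment variables (e.g. AWS_TEST_ACCESS_KEY_ID for stage "test")
  Object.assign(result, pick(new AWS.EnvironmentCredentials(stagePrefix)));

  // 6. stage-specific profile (e.g. AWS_TEST_PROFILE)
  const stageProfile = process.env[`${stagePrefix}_PROFILE`];
  if (stageProfile) {
    Object.assign(result, pick(new AWS.SharedIniFileCredentials({ profile: stageProfile })));
  }

  return new AWS.Credentials(result);
}
```

Because each later source only overrides the fields it actually provides, this matches the "later overrides earlier" rule described above.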
These use cases have so far covered all user requirements, but the current implementation allows additional mechanisms to be added if more become desirable.