Novena Cloud Image
The pre-built cloud image comes with the following pre-built and installed:
- OE directory with packages and image ready-to-go
- local toolchain built and installed
- u-boot image configured and built
- kernel image configured and built (note: the config & devicetree files are symlinked to the git repo, beware!):
- inside ~/linux-next/:
lrwxrwxrwx 1 ubuntu ubuntu 79 Feb 17 09:52 .config -> /home/ubuntu/oe/sources/meta-kosagi/recipes-kernel/linux/linux-novena/defconfig
- inside ~/linux-next/arch/arm/boot/dts:
lrwxrwxrwx 1 ubuntu ubuntu 80 Feb 17 09:54 imx6q.dtsi -> /home/ubuntu/oe/sources/meta-kosagi/recipes-kernel/linux/linux-novena/imx6q.dtsi
lrwxrwxrwx 1 ubuntu ubuntu 80 Feb 17 09:54 novena.dts -> /home/ubuntu/oe/sources/meta-kosagi/recipes-kernel/linux/linux-novena/novena.dts
- when building the kernel, don't forget to build the uImage target and specify LOADADDR (see above; a sketch follows this list)
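As a rough sketch of that build (the cross-compile prefix is an assumption about your local toolchain, and the load address comes from the u-boot notes referenced above):

cd ~/linux-next
# <cross-prefix> and <load address> are placeholders; use your toolchain's
# prefix and the LOADADDR from the u-boot section above
make ARCH=arm CROSS_COMPILE=<cross-prefix>- uImage LOADADDR=<load address>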
Accessing the cloud image
Access to the cloud image is granted by emailing bunnie, who will create a username and password for you. You will need two sets of credentials: one for VPN access (to reach your machine via ssh), and one for the cloud management interface.
If you don't want to deal with managing a cloud instance, bunnie can also start an instance for you and load it with your ssh public key. In that case, the only thing you need to give him is a username/password combo for the VPN. The VPN is implemented using PPTP on DD-WRT. Yes, it's insecure, but the router currently installed can't run OpenVPN. Perhaps future equipment upgrades will change this, but for now the only login method allowed to internal machines on the VPN is ssh without passwords (pubkey-only).
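As a hedged sketch of that flow (the key filename and internal address are placeholders; the ubuntu user is assumed from the home directory paths above):

# generate a keypair and mail the public half (.pub) to bunnie
ssh-keygen -t rsa -f ~/.ssh/novena_cloud
# once the PPTP VPN is up, log in with the matching private key
ssh -i ~/.ssh/novena_cloud ubuntu@<internal instance IP>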
Administrative notes
To copy a private cloud root instance to EC2
To duplicate an instance on the kosagi private cloud into EC2:
- you must have admin privileges on hexapod.
- you must have an EC2 certificate with its matching private key, plus your access key ID and secret access key. These are found in your EC2 console: click your username in the upper right-hand corner and select "Security Credentials". Everything except the private key can be downloaded. If you lack the private key for your certificate, you can create a new certificate on the spot and download a fresh private key. Note that if you already have two certificates, you can't create another; you have to permanently delete/revoke one of the existing certs to make a new one. (An environment-variable alternative to passing credentials on every command line is sketched after this list.)
- you need to create an S3 bucket to hold the image as it is uploaded. Remember the availability zone and bucket name.
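If you'd rather not pass the certificate and key on every command line, the legacy EC2 API tools can also read them from the environment. A sketch, using the same placeholder filenames as the command below:

export EC2_CERT=~/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export EC2_PRIVATE_KEY=~/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem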
First, create a snapshot on the local cloud: go to "Instances", and under "Actions" click "Create Snapshot".
Then, under "Images & Snapshots", find the snapshot you made and click on the image name. There will be an ID code; take note of it.
On the private cloud server (hexapod), change the access permissions on /mnt/openstack/glance/<image ID> to a+r. Otherwise, the next commands will fail with obscure error messages that don't point at the actual problem.
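For example (sudo is an assumption; adjust to whoever owns the glance directory):

sudo chmod a+r /mnt/openstack/glance/<image ID>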
On hexapod, send the volume into EC2:
ec2-import-volume -o <Access key ID> -w <Secret Access Key> \
  --cert cert-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem \
  --private-key pk-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem \
  -f RAW -b <bucket-name> --region <region> \
  --description "insert descriptive name here" \
  -z <availability zone> /mnt/openstack/glance/<image ID>
If this command succeeds, don't forget to change the permissions on the image back to o-r. If it fails, re-run with the --debug and -v flags. The most likely problems are with the credentials. If you're having trouble, try simpler ec2 commands like ec2-describe-availability-zones; that also requires your X.509 certificate, so you can at least test it out. I found that making your own cert doesn't always work (it sometimes does, and when it fails, the error messages are not helpful); you should really use the certificates created by the Amazon interface.
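As a sketch, revoking the world-read permission and sanity-checking your credentials might look like this (same placeholder filenames as above):

sudo chmod o-r /mnt/openstack/glance/<image ID>
ec2-describe-availability-zones --region <region> --cert cert-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem --private-key pk-XXXXXXXXXXXXXXXXXXXXXXXXXXX.pem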
Now, log into your EC2 dashboard and check, under "Elastic Block Store->Volumes" in the availability zone you uploaded to, that a new volume has been created. If the volume is there, go to S3 and empty the bucket (otherwise you'll just keep paying to store its contents for no good reason).
Select the volume you've just uploaded, right-click, and "create snapshot"
Go to "Elastic Block Store->Snapshots"
Select the new snapshot, right-click, and "Create image from snapshot"
This will start a wizard that asks you for a kernel ID. The UI for this sucks. Basically, go to the Canonical website and pull up the list of Ubuntu kernel IDs for your region, then find the matching one in the (very long) pull-down list. For ap-southeast-1a, I used aki-fe1354ac for a 12.04 LTS image.
Once this is done, you'll have a new AMI. Go to "Images->AMIs". Right-click your new image, and select "Launch Instance". Configure the keypair, security group, etc.
Finally, you should be able to go to "Instances->Instances" and see your new instance running.
Debugging
- credential issues: try running a "simple" command like ec2-describe-availability-zones to work out X.509 issues; this is the biggest headache. Worst case, make a new X.509 cert from the Amazon UI; you can have up to two certs, so if you already have two, nuke one and you'll get a link to make a new one.
- if your imports fail, you *must* cancel them. Use ec2-describe-conversion-tasks (don't forget to specify --region, otherwise it just defaults to us-east) to list the tasks, then use ec2-cancel-conversion-task to remove each one from the queue (see the sketch after this list). Otherwise, you'll eventually get this error: "Client.ResourceLimitExceeded: Task failed to initialize - conversion task limit (5) exceeded."
- --debug and -v are your friends
- check permissions on the volumes you're trying to upload. The ec2 tools don't give helpful error reports when they can't read the file; it looks like a network connection error rather than a file access error.
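A sketch of the cleanup described above (credential flags are omitted for brevity; the task ID is whatever the listing prints):

# list pending and failed import tasks
ec2-describe-conversion-tasks --region <region>
# cancel each stuck task by the ID printed in the listing
ec2-cancel-conversion-task --region <region> <task ID>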