scp-ing from a remote Rails server

Yesterday I was trying to scp some logs from our Rails EC2 server for a colleague. For a long time the scp didn't work, exiting with status 1, and the only output was a cryptic message: "Now using system ruby".

I couldn't figure out what the problem might be, and suspected something was wrong on the client side (my local machine). The verbose scp output:


...
debug1: Sending command: scp -v -f -- /****/shared/log/logfile.log
Sink: Now using system ruby.
Now using system ruby.
Sending file modes: C0664 1583336 logfile.log
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
...

gave a hint: the remote system was sending the output "Now using system ruby" to the "Sink" (my machine) instead of the file contents.

Evidently, when we checked the server, there was an "rvm use" call in the .bashrc file, which echoed the above output on every login.

Strangely enough, I thought scp would have handled this by discarding the terminal output, but it didn't.

Once the echo was removed, the scp succeeded.

I hope a future version of scp introduces a feature that purges this kind of shell output and honors only the file contents.
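Until then, the usual workaround is on the server side: guard interactive-only output in ~/.bashrc so that non-interactive sessions (like the one scp opens) start silently. A minimal sketch, assuming rvm is what printed the message:

```shell
# ~/.bashrc on the server -- sketch of the guard.
# $- holds the current shell's option flags; it contains "i" only
# for interactive shells, so scp/rsync/sftp sessions skip the echo.
case $- in
  *i*)
    # Interactive login: safe to print output.
    rvm use system   # this is what printed "Now using system ruby"
    ;;
  *)
    # Non-interactive session (scp, ssh host cmd): stop sourcing here.
    return
    ;;
esac
```

With this in place, `scp -v` no longer sees stray text on the protocol stream, because the rvm call never runs for non-interactive shells.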

After a long time

This is a long-pending post about the many things that have happened over the past while. I couldn't post earlier due to all the frenzy going around.

To start off, I conducted three workshops and talks.

The first was a workshop for the faculty of the University of Pune, held at PVG College, Pune. It aimed to give a brief idea of how OpenStack works, along with Eucalyptus.

I then went on to demo a cloud setup over OpenStack.

Slides @ http://vipul.byclor.org/slides/presentation.html

The second was an Android workshop at a state-level student event for around 150 students from 28 colleges across Maharashtra. It was held at ADCET College, Ashta (http://www.adcet.org.in/).

This one was special for me, as I received a splendid response from the participants. It was awesome!

You can find demos from the workshop over at https://github.com/vipulnsward/AndroidWorkshop

Then came a brief lightning talk on Git, Ruby, and the cloud at RubyConfIndia 2012 (rubyconfindia.org).

Slides @ https://github.com/vipulnsward/RubyLighteningTalk

I also conducted a whole-day workshop on Blender at SKNCoE (skncoe.edu.in), and then the same at PICT, my college (pict.edu).

Well, that's all for now. There have been many exciting projects I've been working on, which I will post about soon. You can find many @ github.com/vipulnsward

UEC

Yesterday, we (Shyam Sir, his brother Anand, and me) completed the setup of a test cloud at PICT.

We set up the cloud following the procedure at

https://help.ubuntu.com/community/UEC

It was havoc for many days; Shyam Sir had started the setup long before. It's easy to set up the cloud if it's on an internal network, but we hit many collisions because we wanted to use two NICs: one for the public IP, and a private one that bridged to the internal nodes.

We set up Eucalyptus, which provides Infrastructure as a Service (IaaS) and consists of the following main components:

1. Cloud Controller (CLC, your main interface for the external world)
2. Cluster Controller (CC, controls the internal cluster/s)
3. Walrus (bucket-based storage, the S3 equivalent)
4. Storage Controller (SC, manages block volumes)

Every node within the network should run a Node Controller (NC).

Important things to remember:
1. Make sure you have assigned proper IPs for the internal and external networks.
2. Make sure there is passwordless login from the CC to each NC.
3. ssh is hell; it wasted days for us. Make sure the ssh ports are properly configured, since trouble is bound to happen with two NICs.
4. Make sure you have set the ListenAddress option to bind sshd to the desired interface (in /etc/ssh/sshd_config, the server-side config).
5. See that all internal interfaces act as a bridge, with IP forwarding enabled.
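A rough sketch of items 2, 4, and 5 from the checklist above. The hostnames, usernames, and addresses (node1, eucalyptus, 192.168.1.1) are placeholders that must be adapted to your network:

```shell
# 2. Passwordless login from the CC to each NC (run on the CC).
#    'node1' and the 'eucalyptus' user are placeholder names.
ssh-keygen -t rsa                  # accept defaults, empty passphrase
ssh-copy-id eucalyptus@node1       # copies the public key to the NC

# 4. Bind sshd to the internal interface only. Note this goes in the
#    server config, /etc/ssh/sshd_config:
#        ListenAddress 192.168.1.1    # internal interface address
#    then restart sshd:
sudo service ssh restart

# 5. Enable IP forwarding so the bridge can route for the nodes:
sudo sysctl -w net.ipv4.ip_forward=1
```

These are one-time setup commands to run on the front end; step 5 can be made persistent via /etc/sysctl.conf.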

*For detailed errors, check cc.log in /var/log/eucalyptus.

*Only after the nodes are properly registered do we find an nc.log on the node machine. This is a sign of relief (it's almost done), as your CC and NC are communicating.
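For reference, node registration on the front end was done with euca_conf in UEC of that era; treat the exact flags as an assumption to verify against the UEC docs, and the node IP below as a placeholder:

```shell
# On the front end (CLC/CC): register a node controller by IP.
sudo euca_conf --register-nodes "192.168.1.101"

# Then confirm the two sides are talking by looking for nc.log
# appearing on the node machine:
ssh eucalyptus@192.168.1.101 'ls -l /var/log/eucalyptus/nc.log'
```

If nc.log shows up on the node, the CC and NC are communicating and the cluster side of the setup is essentially done.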

—More to come on image installation and management—