
Happy Software Freedom Day! Find Out How NASA Has Contributed To Open Source Software:

“Back in 2008 and 2009, people were still trying to figure out what ‘cloud’ meant. While lots of people were calling themselves ‘cloud enabled’ or ‘cloud ready,’ there were few real commercial offerings. With so little clarity on the issue, there was an opportunity for us to help fill that vacuum.” - Raymond O’Brien, NASA Ames Research Center

Needing a way to standardize its web presence, a team of researchers at NASA Ames began a project in 2008 known as NASA.net. The project consolidated web development tools and data resources, improving efficiency across all facets of the space agency. William Eshagh, another key contributor from NASA.net’s early days, aimed to give web developers a single, universally used platform to upload their code to.

“The basic idea was that the web developer would write their code and upload it to the website, and the website would take care of everything else,” according to Eshagh.

Still requiring an “infrastructure service” to manage the large quantities of data that NASA accumulates daily, the Ames project shifted gears, and NASA.net was reorganized as Nebula. Rather than simply setting standards and providing a platform for web developers, the Nebula team would build an open source compute controller. Early on, the collaborative nature of Nebula benefited development, since anyone with an understanding of the technology and the desire to contribute could access the code and improve it. Raymond O’Brien, who remained on the Nebula team, reiterated the appeal of Nebula’s open source identity.

“From the beginning, we wanted this project to involve a very large community—private enterprises, academic institutions, research labs—that would take Nebula and bring it to the next level. It was a dream, a vision. It was that way from the start,” said O’Brien.

An early obstacle the fully open source project had to overcome was a piece of software known as the cloud controller, a pivotal component if end users were to access compute or data. At the time, the existing tools were either written in the wrong programming language or were closed source and unusable due to licensing limitations. It took the Nebula team only a matter of days to build a new cloud controller from scratch, and the project immediately began to attract interest from Rackspace Inc.

“We believed we were addressing a general problem that would have broad interest,” stated Eshagh. “As it turns out, that prediction couldn’t have been more accurate.”

Rackspace, known for its open source storage offering, was about to begin building a cloud controller similar to the one Nebula had just released. Given the technical similarities between the two teams’ work, Rackspace and Nebula began a partnership known as OpenStack, and a community of developers around the world would contribute to building what would become one of the most successful open source cloud operating systems.

The future of OpenStack, and of other open source projects, is bright thanks to the early efforts of the NASA.net team at Ames. Because of that initial devotion to keeping the project open source, the large majority of contributions to the OpenStack code now come from community efforts outside of NASA. Today, on Software Freedom Day, be sure to check out the following resources related to the OpenStack cloud, a NASA spinoff.

Sources:
1. Web Solutions Inspire Cloud Computing Software
http://spinoff.nasa.gov/Spinoff2012/it_2.html
2. Nebula, NASA, and OpenStack
http://open.nasa.gov/blog/2012/06/04/nebula-nasa-and-openstack/
3. Software Freedom Day
http://softwarefreedomday.org/

Pythonistas should check out the packages the OpenStack project uses
Reading other people’s source code is probably the quickest way to discover best practices in programming. Active OSS projects in particular make excellent study material. Among them, OpenStack is a project that Pythonistas all over the world are constantly poking at, so it’s a pretty good pick. Even where the components feel inconsistent with one another, outright mistakes are rare. Just skimming the packages used there should yield plenty of discoveries.

As preparation, dig one directory level and check out the source code. The OpenStack project really has a lot of components, so I narrowed it down to the main ones.
$ mkdir openstack
$ git clone https://github.com/openstack/nova.git openstack/nova
$ git clone https://github.com/openstack/neutron.git openstack/neutron
$ git clone https://github.com/openstack/horizon.git openstack/horizon
$ git clone https://github.com/openstack/keystone.git openstack/keystone
$ git clone https://github.com/openstack/glance.git openstack/glance
$ git clone https://github.com/openstack/cinder.git openstack/cinder
$ git clone https://github.com/openstack/swift.git openstack/swift

The packages a component needs to run are listed in requirements.txt at the top of each project; those needed only for testing are in test-requirements.txt. Incidentally, these files are written in the format read by the package manager pip, so they can be used like pip install -r <requirements-file>.
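For instance, to try one component’s dependencies in a throwaway virtualenv (just a sketch; the directory name venv is arbitrary, and some packages need system libraries to build, so expect a few to fail):
$ virtualenv venv                                  # create an isolated environment
$ . venv/bin/activate                              # enable it in the current shell
$ pip install -r openstack/nova/requirements.txt   # install Nova's runtime dependencies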

To browse component by component, do something like the following. The files contain version constraints and comments used for installation, which just get in the way here, so strip them out.
$ cat openstack/nova/requirements.txt | cut -d ">" -f 1 | cut -d "=" -f 1 | cut -d "<" -f 1 | sed -e "/^#/d" -e "/^$/d"
$ cat openstack/nova/test-requirements.txt | cut -d ">" -f 1 | cut -d "=" -f 1 | cut -d "<" -f 1 | sed -e "/^#/d" -e "/^$/d"

Looking at them individually is fine, but I want a single list of every package in use, so let’s write a small shell script.
$ cat openstack-unique-libs.sh
#!/bin/sh

# Concatenate the requirements files of every component, keeping a newline
# between files so entries from adjacent files don't get glued together.
# (Note: += is a bashism that plain sh may not support.)
LIBS=""
for i in `ls openstack`
do
  LIBS="$LIBS
`cat openstack/$i/requirements.txt`
`cat openstack/$i/test-requirements.txt`"
done
# Strip version constraints, comments, and blank lines, then deduplicate.
echo "$LIBS" | cut -d ">" -f 1 | cut -d "=" -f 1 | cut -d "<" -f 1 | sed -e "/^#/d" -e "/^$/d" | sort | uniq

Let’s run it.
$ sh openstack-unique-libs.sh
-f http://pysendfile.googlecode.com/files/pysendfile-2.0.0.tar.gz
Babel
Django
Jinja2
MySQL-python
Paste
PasteDeploy
Routes
SQLAlchemy
WebOb
WebTest
alembic
amqplib
anyjson
argparse
boto
cliff
configobj
coverage
discover
django-nose
django_compressor
django_openstack_auth
docutils
dogpile.cache
eventlet
feedparser
fixtures
greenlet
hacking
hp3parclient
hplefthandclient
httplib2
iso8601
jsonrpclib
jsonschema
keyring
kombu
lesscpy
lockfile
lxml
mock
mox
netaddr
netifaces
nose
nose-exclude
nosehtmloutput
nosexcover
oauthlib
openstack.nose_plugin
ordereddict
oslo.config
oslo.messaging
oslo.rootwrap
oslo.sphinx
oslo.vmware
oslosphinx
oslotest
paramiko
passlib
pastedeploy
pbr
psutil
psycopg2
pyOpenSSL
pyasn1
pycadf
pycrypto
pylint
pymongo
pysendfile
pysqlite
python-ceilometerclient
python-cinderclient
python-glanceclient
python-heatclient
python-keystoneclient
python-ldap
python-memcached
python-neutronclient
python-novaclient
python-saharaclient
python-subunit
python-swiftclient
python-troveclient
pytz
qpid-python
requests
rtslib-fb
selenium
simplejson
six
sphinx
sqlalchemy-migrate
stevedore
suds
taskflow
testrepository
testscenarios
testtools
websockify
wsgiref
xattr

As a side note, the oslo.* packages are developed in-house by the OpenStack project, and the same basically goes for anything carrying the name of an OpenStack component.

Knowing a few active OSS projects like this should sharpen your sense for good libraries. Incidentally, the “inconsistency” I mentioned earlier means things like one component using alembic for DB migrations while another uses sqlalchemy-migrate. Looking at the list above, the test doubles mock and mox overlap in the same way. If I had to pick between the pairs, my recommendations would be alembic and mock; a quick sketch of the alembic workflow follows.
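For a taste of alembic, its command-line workflow goes roughly like this (a minimal sketch; the directory name and revision message are arbitrary, and a configured database connection is assumed):
$ alembic init alembic                   # generate alembic.ini and the migration environment
$ alembic revision -m "add user table"   # create an empty migration script to fill in
$ alembic upgrade head                   # apply all migrations up to the latest
$ alembic downgrade -1                   # roll back a single revision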
Cloud computing: OpenStack notes

Matt Asay’s January 20, 2015 piece arguing that the OpenStack community is holding back the growth of private cloud

OpenStack in theory vs. in practice

  • It would be great if theory carried over to practice as-is, but it doesn’t.
  • In practice, OpenStack remains enormously complex.
  • Probably nobody knows OpenStack as well as Randy Bias of Cloudscaling.
    • He was one of OpenStack’s early pioneers and a founding member of the OpenStack Foundation.
    • When Bias warns that “OpenStack risks collapsing under its own weight” [1], people running OpenStack should listen.
    • Summarizing Bias’s recent post [1]:
    • OpenStack keeps looking for ways to reconcile the diverse interests within its community, but what it really needs is “product management and product strategy leadership.”
  • Without that leadership, enterprises that set out to adopt OpenStack will learn, as David (head of infrastructure at Packet) did, that the community struggles to deliver the convenience enterprises expect from the cloud. [2]

A month in, I realized that the piles of documentation I had read were out of date or inaccurate, so I had to hunt for the truth by sifting through docs, wikis, IRC, and commit messages. Once past the basics, I spent most of my time debugging Python. Proving “does X actually work?” meant long debugging sessions, and it was dreadfully slow going. [2] - Why we threw four months of work on adopting OpenStack into the trash

VMware’s recent moves

Red Hat Releases Enterprise Linux OpenStack Platform 6

Red Hat has released its Enterprise Linux OpenStack Platform 6. The platform is based on the OpenStack Juno release, and new features include IPv6 support, Neutron high availability, single root I/O virtualization (SR-IOV) networking, support for multiple LDAP backends, support for data processing, and deeper Ceph integration, among other things.

Clouds, open source, and new network models: Part 3

by James Urquhart, October 28, 2011

In part 1 of this series, I described what is becoming an increasingly ubiquitous model for cloud computing networks, namely the use of simple abstractions delivered by network systems of varying sophistication. In part 2, I then described OpenStack’s Quantum network service stack and how it reflected that model. Software defined networking (SDN) is an increasingly popular—but extremely nascent—model for network control, based on the idea that network traffic flow can be made programmable at scale, thus enabling new dynamic models for traffic management. Because it can create “virtual” networks in the form of custom traffic flows, it can be confusing to see how SDN and cloud network abstractions like Quantum’s are related.

http://news.cnet.com/8301-19413_3-20126245-240/clouds-open-source-and-new-network-models-part-3/

Scobleizer interviews Gluster CEO outside Facebook’s Palo Alto DC.

Docker launches container orchestration toolkit with Hadoop and OpenStack support

Docker Inc. is finally making its long-anticipated management technology available for download after three months of open-air incubation on GitHub. The launch marks another major step forward in the evolution of containers towards enterprise-readiness.

One of the biggest benefits that the lightweight virtualization model offers over the conventional hypervisors dominating the enterprise today is the ability to easily shuffle code and data across different types of infrastructure in standardized packages. But that interoperability only exists in theory for organizations with mission-critical applications.

In practice, the average business process takes a great deal of scaffolding to support that can’t simply migrate with the workload to another destination as part of the container. That logistical constraint also poses a lesser but no less significant challenge for developers in moving an application through the different stages of the project lifecycle.

The orchestration toolkit promises to close the loop on that manageability gap. It provides a unified way to handle a container from the time it’s created on a developer’s laptop to the final production rollout and every pit stop in between. The launch version offers several improvements over the original release from December that significantly broadens the range of supported use cases.

Docker Machine, the command line utility that handles bulk deployment and updating of containers across hosts, can now run on a dozen different platforms. The list includes OpenStack and the major public clouds as well as more surprising items like VMware’s OS X desktop hypervisor, which is a response to increasingly vocal demand for more operating system options than just Linux.

The same pressure had previously led Docker to team up with Microsoft to bring its namesake technology to Windows, an effort that has already produced a native command line client. But although the journey of a container may start on the developer’s laptop, the startup’s ambitions hardly end there.

That is evident in the update to Docker Swarm, the clustering component, which has been extended to support three other schedulers besides Mesos, including the hugely popular Kubernetes from Google and Amazon’s homegrown alternative. The enhancement provides that much more flexibility when it comes to scaling container clusters in the cloud.

Joining the new options is compatibility with the ZooKeeper coordination service for Hadoop. The data-crunching platform was quietly updated to run on containers in November, a major opening that this update is meant to seize. The release also targets other distributed applications with the addition of support for Consul, a technology that offers the same kind of coordination capabilities.

Another reason the latter addition stands out is that the framework is part of another recently introduced orchestration suite that competes directly with Docker in many areas, which reinforces the startup’s much-touted policy of openness. But as important as it is, freedom of choice alone won’t bring containers into the enterprise. Much more progress is needed on the orchestration front to turn the paradigm into a viable alternative to traditional virtualization.

(via Piston Cloud Computing, Inc. | Easy. Secure. Open. » Watch: Arista Networks & Piston Cloud Webinar)

Internap Amplifies Cloud Play With $30 Million Voxel Acquisition

By Andrew R Hickey, CRN

January 03, 2012, 10:20 AM ET

Cloud and IT infrastructure player Internap kicked off the new year Tuesday with the $30 million acquisition of Voxel Holdings, an enterprise cloud and hosting company that will fortify Internap’s push deeper into cloud computing. Adding Voxel, founded in 1999, to its cloud portfolio gives Internap a stronger foothold in the growing cloud market, an area Internap has already cut into with its OpenStack public cloud offering, an OpenStack storage offering, and its private cloud play. Based in New York, Voxel has locations spanning North America, Amsterdam, and Singapore, and the company currently boasts 1,000 customers using its on-demand cloud and dedicated hosting services and its automated provisioning capabilities.

http://www.crn.com/news/cloud/232301171/internap-amplifies-cloud-play-with-30-million-voxel-acquisition.htm

Trying out OpenStack quickly with Vagrant and RDO
OpenStack is famously tedious to deploy and operate, but using one of the distributions put out by various vendors can ease the pain a little. RDO is the community edition of Red Hat’s OpenStack distribution. This time, let’s combine RDO with Vagrant to quickly stand up an environment for playing with OpenStack. Vagrant’s default VirtualBox provider is used as the backend.

The Vagrantfile used this time is shown below. Since we are running a virtual machine inside a virtual machine (nested virtualization), you’ll want around 4 GB of memory. CentOS 6.5 is used for the host VM.
$ cat << EOS > Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "centos65"
  config.vm.network :private_network, ip: "192.168.33.10"
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "4096", "--cpus", "2"]
  end
end

EOS
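One assumption here: a Vagrant box named centos65 is already registered. If it isn’t, add one first; the URL below is only a placeholder, so point it at a real CentOS 6.5 box:
$ vagrant box add centos65 http://example.com/boxes/centos65.box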

Bring up the VM from the Vagrantfile above and log in.
$ vagrant up
$ vagrant ssh

First, install RDO.
$ sudo yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

This registers the repositories that hold the various RPMs needed to run OpenStack.
$ rpm -ql rdo-release
/etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Havana
/etc/pki/rpm-gpg/RPM-GPG-KEY-foreman
/etc/pki/rpm-gpg/RPM-GPG-KEY-puppetlabs
/etc/yum.repos.d/foreman.repo
/etc/yum.repos.d/puppetlabs.repo
/etc/yum.repos.d/rdo-release.repo
$ cat /etc/yum.repos.d/rdo-release.repo
[openstack-havana]
name=OpenStack Havana Repository
baseurl=http://repos.fedorapeople.org/repos/openstack/openstack-havana/epel-6/
enabled=1
skip_if_unavailable=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Havana
priority=98
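To double-check that yum can actually see the new repository, a quick sanity check like this should list the openstack-havana repo (the exact listing varies with the RDO release):
$ yum repolist enabled | grep -i openstack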


Next, install Packstack, a Puppet-based installer for OpenStack. It installs the RPMs from the repositories above and configures them.
$ sudo yum install -y openstack-packstack

Install OpenStack using Packstack. This takes quite a long time, so be patient. The layout used here is allinone, which installs every component on a single host. At the same time, the floating IP range is carved out of the prefix VirtualBox uses for NAT (10.0.2.0/24).
$ packstack --allinone --provision-demo-floatrange=10.0.2.128/25
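As an aside, if you’d rather tweak the configuration than take allinone as-is, packstack can dump its settings into an answer file to edit and replay (a sketch; the file name is arbitrary):
$ packstack --gen-answer-file=answers.txt   # write every tunable setting to a file
$ vi answers.txt                            # edit to taste
$ packstack --answer-file=answers.txt       # install with the edited settings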

Once installation finishes, a few additional settings are needed. First, the OpenStack web UI (component name: Horizon) only accepts access from a specific address (10.0.2.15), so lift that restriction; the address is NATed by VirtualBox and cannot be reached directly.
$ sudo sed -i -e "s:^ALLOWED_HOSTS = .*$:ALLOWED_HOSTS = \['\*'\, ]:" /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
$ sudo service httpd restart
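To confirm the edit landed, grep the setting back out; given the sed above, it should read as follows:
$ grep "^ALLOWED_HOSTS" /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
ALLOWED_HOSTS = ['*', ]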

Next come the compute settings (component name: Nova). The first change switches the hypervisor from the default KVM to qemu (pure software emulation), because VirtualBox has no VT passthrough, so KVM won’t run inside it. The second sets the VM keyboard layout to Japanese. The third points the address used to reach the VMs’ web console at one that is actually routable.
$ sudo sed -i -e "s:^libvirt_type=.*$:libvirt_type=qemu:" /etc/nova/nova.conf
$ sudo sed -i -e "s:^#vnc_keymap=.*$:vnc_keymap=ja:" /etc/nova/nova.conf
$ sudo sed -i -e "s/10.0.2.15:6080/192.168.33.10:6080/" /etc/nova/nova.conf
$ sudo service openstack-nova-compute restart
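To eyeball all three changes at once, grep them back out of nova.conf; expect qemu, ja, and a console URL pointing at 192.168.33.10:6080:
$ grep -e "^libvirt_type" -e "^vnc_keymap" -e "6080" /etc/nova/nova.conf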

Now, access the following URL and the OpenStack web UI should come up.
http://192.168.33.10/dashboard
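If the page doesn’t load, a quick probe from the physical host helps tell whether Horizon is serving at all; expect an HTTP 200 or a redirect to the login page:
$ curl -sI http://192.168.33.10/dashboard | head -n 1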

The accounts needed to log in are in the root user’s home directory. By default, two are provisioned: demo with ordinary user privileges and admin with administrator privileges. In the example below, the demo user logs in with the password 294332ec012e498c and the admin user with 23927beaf78642e1. RDO generates fresh random passwords on every install.
$ sudo cat /root/keystonerc_demo
export OS_USERNAME=demo
export OS_TENANT_NAME=demo
export OS_PASSWORD=294332ec012e498c
export OS_AUTH_URL=http://10.0.2.15:35357/v2.0/
export PS1='[\u@\h \W(keystone_demo)]\$ '
$ sudo cat /root/keystonerc_admin
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=23927beaf78642e1
export OS_AUTH_URL=http://10.0.2.15:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '

Operating the web UI is fine, but while we’re at it, let’s create a VM instance from the command-line client. First, load the demo user’s environment variables.
$ sudo -i
# source keystonerc_demo

First, check the VM specs (flavors). The smallest one (m1.tiny) should be plenty.
# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Next, check the network to attach to. Neutron is the component that manages OpenStack’s networking. Two networks, public and private, exist by default; the VM attaches to private.
# neutron net-list
+--------------------------------------+---------+--------------------------------------------------+
| id                                   | name    | subnets                                          |
+--------------------------------------+---------+--------------------------------------------------+
| 3e33219e-4eaa-4246-82d9-3df0ad87fa80 | public  | c44ebd07-a74d-4f5c-88a1-8aabb68bcbc7             |
| 41002405-2da9-45c3-b2e0-7dd54a319815 | private | 9e44fe14-fed1-49ef-87f1-8ba9a05ccbb8 10.0.0.0/24 |
+--------------------------------------+---------+--------------------------------------------------+

Check the image used to boot the VM. Glance is the component that manages OpenStack’s image files. By default, an image of the cirros Linux distribution is registered.
# glance image-list
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ac256315-7aaa-4e9a-86a9-8c0c3a0bb5aa | cirros | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+

Once everything checks out, boot a VM instance.
# nova boot --flavor 1 --image ac256315-7aaa-4e9a-86a9-8c0c3a0bb5aa --nic net-id=41002405-2da9-45c3-b2e0-7dd54a319815 vm1
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | AoSwqAFtGmK4                                  |
| config_drive                         |                                               |
| created                              | 2014-03-24T14:45:25Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 9e829685-8bd9-4733-82b3-4f00d23c31dd          |
| image                                | cirros (ac256315-7aaa-4e9a-86a9-8c0c3a0bb5aa) |
| key_name                             | -                                             |
| metadata                             | {}                                            |
| name                                 | vm1                                           |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | 70791dac7fca40f1a432a1831dfdd58e              |
| updated                              | 2014-03-24T14:45:26Z                          |
| user_id                              | 610aad0d156841c3b9f497bb5496eed7              |
+--------------------------------------+-----------------------------------------------+

If all went well, a VM instance has now been added.
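Before heading to the dashboard, the build can also be followed from the CLI; the Status column should move from BUILD to ACTIVE (IDs and addresses differ per install):
# nova list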

Log in to the dashboard.

Move to the Instances tab and you should be able to see the VM instance.

Go to the web console screen.

You can log in to the VM instance with cirros / cubswin:).

And they all lived happily ever after.
Two excellent #OpenStack meetups - join us! @openstackil @igtcloud

We have two great upcoming meetups you won’t want to miss:

Online Meetup: Application Orchestration on VMware Integrated OpenStack w/ TOSCA - Tomorrow, February 24th at 7:00 PM IL Time, with VMware and GigaSpaces, in conjunction with the Cloud Online Meetup

and 

Deploying Cinder in Production with Avishay Traeger, Stratoscale - Tuesday March 3rd at 6:30 PM, hosted by Liveperson, 13 Zarchin Street, Raanana, Entrance B

AT&T and OpenStack

AT&T is going to build a developer cloud using the OpenStack cloud framework. Before I go any further, I should provide a quick explanation of OpenStack. OpenStack is open source cloud software created mainly by Rackspace and NASA; it is currently at release 1.1. AT&T is hoping to release a cloud product called Cloud Architect.

Source: http://arstechnica.com/business/news/2012/01/att-joins-openstack-as-it-launches-cloud-for-developers.ars