Saturday, December 28, 2013

SuSE Linux Enterprise Server: SLES containers with LDAP (TLS) authentication.

SLES is free to download and free to use. Sign up at suse.com; the trial code provides updates for 60 days.


I.) OS level vs HW level virtualization

What is LXC?
LinuX Containers (LXC) provide OS level virtualization. It's similar to OpenVZ on Linux, FreeBSD jails, Solaris Zones and AIX WPARs.

What are the differences between HW level and OS level virtualization?
In a nutshell:
- HW level virtualization emulates computer hardware (a guest virtual machine); the guest OS runs on this emulated hardware and thinks it's running on a real machine.
- OS level virtualization doesn't use emulated hardware; instead it uses a special environment with its own process management, network namespace, etc. For example, the guest's file systems are chrooted directories on the host OS.
For full specifications you can search the web.

What are LXC's advantages, why use it?
Creating new containers is extremely fast. A full SLES installation takes 1-2 minutes without user interaction. If you create more containers, the subsequent ones are built from the RAM cache in 10-20 seconds.
Stopping/starting/rebooting containers is extremely fast as well. Boot-up takes 3-6 seconds (including system daemons such as sshd), and the shutdown process takes 1-2 seconds.
Resource usage is dynamic; the RAM management in particular is amazing.
You can access/modify/back up your guests' filesystems directly, as these FSs are directories on the host OS. That means quicker troubleshooting.

Is LXC secure?
It's more secure than it used to be, but not as secure as a hypervisor-provided environment. If you need very strong security, use a hypervisor instead of LXC.

What can I use as a guest OS?
Various Linux distributions.

Which one should I use?
HW virt.: if you need 100% isolated guests, mixed guest operating systems (Linux, Windows, BSD) on the same hypervisor, or improved security. For example: a company using Windows AD, Oracle databases on Oracle Linux instances, and SLES machines with middleware products.
OS virt.: if you need a resource- and cost-effective (cheaper), homogeneous environment with Linux guests only. For example: web hosting with additional services, or a middleware farm with development, test and production environments.

Both have their own advantages and disadvantages; choose the one that suits your needs.


II.) Install LXC on SuSE Linux Enterprise Server

My base system is a 64-bit SuSE Linux Enterprise Server 11 SP3, a default installation with these patterns:
Base System
AppArmor
32-bit Runtime Env.
Help and Support Doc.
Minimal System (Appliances)

This is a virtual machine. I have limited hardware resources (5GB RAM and 30GB free disk space) for my purposes, but I need 4 independent SLES instances to test a lot of middleware stuff.
5GB RAM and 30GB disk for 4 virtual machines doesn't sound too good. So I created one VM with three LXCs and assigned the 5GB RAM and 30GB disk to this VM; the VM and the three LXCs share the resources.
The other reason is that I had never used LXC before, and I wanted to try this interesting stuff.

This is my virtual machine:
Linux vm 3.0.93-0.8-default #1 SMP Tue Aug 27 08:44:18 UTC 2013 (70ed288) x86_64 x86_64 x86_64 GNU/Linux

The memory consumption of the VM and the three running LXCs, i.e. four running SLES 11 SP3 instances with their system daemons:
             total       used       free     shared    buffers     cached
Mem:          4906        464       4441          0         21        281
-/+ buffers/cache:        194       4711
Swap:          127          0        127

Not bad. This default install scenario takes 1493 MBytes on the root filesystem.

Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-rootlv      4032  1493      2335  39% /

Install LXC and set up the network bridge

zypper install lxc yast2-lxc bridge-utils
yast -> Network Devices -> Network Settings
Add -> Device Type: Bridge -> Next
Statically assigned IP Address: 192.168.2.2, Subnet mask: 255.255.255.0 # The same IP address as eth0 currently has.
Bridged Devices: [x] eth0 - Ethernet Card 0 (configured), Continue, Quit.
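For reference, YaST writes this bridge setup into /etc/sysconfig/network/ifcfg-br0. A minimal sketch of what the file should roughly contain (the exact keys YaST generates may differ slightly):

vm:~ # cat /etc/sysconfig/network/ifcfg-br0
BOOTPROTO='static'
BRIDGE='yes'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
IPADDR='192.168.2.2/24'
STARTMODE='auto'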

vm:~ # ifconfig -a
br0       Link encap:Ethernet  HWaddr 52:54:00:5F:8D:F7
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:76 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4595 (4.4 Kb)  TX bytes:26814 (26.1 Kb)

eth0      Link encap:Ethernet  HWaddr 52:54:00:5F:8D:F7
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2074 errors:0 dropped:302 overruns:0 frame:0
          TX packets:1313 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1398189 (1.3 Mb)  TX bytes:406608 (397.0 Kb)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:100 (100.0 b)  TX bytes:100 (100.0 b)

And the active ssh connection hasn't terminated. :-)

I also prepared /etc/hosts on vm:
127.0.0.1     localhost
192.168.2.2   vm
192.168.2.3   lxc1
192.168.2.4   lxc2
192.168.2.5   lxc3

yast -> Security and Users -> Firewall -> Interfaces -> br0, eth0: Internal Zone (intranet server, all ports are open)

The main LXC directory here is /lxc; the containers will be installed in /lxc/<container_name>. /lxc can be anything else:
the original (and default) /var/lib/lxc/ or /srv are fine as well. I created LVM-based filesystems for my containers.
mkdir /lxc && cd /lxc && mkdir lxc1 lxc2 lxc3

yast -> System -> Partitioner -> Volume Management

Now:
││Device          │     Size│F│Enc│Type     │FS Type│Label│Mount Point│Mount by│Used by│M│  
││/dev/vmvg       │ 29.88 GB│ │   │LVM2 vmvg│       │     │           │        │       │L│  
││/dev/vmvg/homelv│128.00 MB│ │   │LV       │Ext3   │     │/home      │Kernel  │       │ │  
││/dev/vmvg/rootlv│  4.00 GB│ │   │LV       │Ext3   │     │/          │Kernel  │       │ │  
││/dev/vmvg/swaplv│128.00 MB│ │   │LV       │Swap   │     │swap       │Kernel  │       │ │  
││/dev/vmvg/tmplv │  1.00 GB│ │   │LV       │Ext3   │     │/tmp       │Kernel  │       │ │

Add -> Logical Volume -> Name: lxc1lv -> Type: (x) Normal Volume -> (x) Custom Size, Size: 5GB, Stripes: Number 1
(x) Format partition, File System: ext3, (x) Mount partition, Mount Point: /lxc/lxc1, Finish

Create 2 more FSs in the same way for lxc2 and lxc3. Later you can modify these FSs; explore LVM! ;-)

vm:~ # df -m /lxc/*
Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-lxc1lv      5040   139      4646   3% /lxc/lxc1
/dev/mapper/vmvg-lxc2lv      5040   139      4646   3% /lxc/lxc2
/dev/mapper/vmvg-lxc3lv      5040   139      4646   3% /lxc/lxc3

Whoops, /tmp is too big.
umount /tmp
Go back to LVM module in yast.
Select: │/dev/vmvg/tmplv │  1.00 GB│ │   │LV          │Ext3   │     │/tmp *        │Kernel  │   │
Press Enter on it, Resize, (x) Custom Size, size: 512MB, Next, Finish, Quit.

That's all; YaST remounted it automatically, and now it has the size I wanted:

vm:~ # df -m /tmp
Filesystem             1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-tmplv       504    33       446   7% /tmp
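For the record, the same shrink can be done from the shell. A sketch of the equivalent steps (shrinking ext3 requires the filesystem to be unmounted, and the filesystem must be shrunk before the logical volume):

umount /tmp
e2fsck -f /dev/vmvg/tmplv          # mandatory consistency check before shrinking
resize2fs /dev/vmvg/tmplv 512M     # shrink the filesystem first...
lvreduce -L 512M /dev/vmvg/tmplv   # ...then the logical volume
mount /tmp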


III.) Creating containers

Unfortunately, there is no option to change the default path of newly created containers when you create them via YaST's LXC module.
No problem, though: the template can be modified. Note: there are templates to install Debian, Ubuntu, etc. into containers.

cd /usr/share/lxc/templates
cp lxc-sles lxc-sles.orig
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc1/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc1'
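The same substitution will be repeated for lxc2 and lxc3 below; if you prefer, a small helper function (a sketch, relying on the lxc-sles.orig backup made above) does it in one step:

set_lxc_path() (   # usage: set_lxc_path lxc2 -- runs in a subshell, so your cwd is untouched
    cd /usr/share/lxc/templates || exit 1
    cp lxc-sles.orig lxc-sles                # start from the pristine template
    sed -i "s|\$path|/lxc/$1|g" lxc-sles     # point the template at /lxc/<name>
    grep "/lxc/$1" lxc-sles                  # verify the substitution
)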

yast -> Miscellaneous -> Lxc -> Create
Name: lxc1, Template: sles
IP address: 192.168.2.3, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234
# Choose a strong password; 1234 is just an example.
If you want to replace a container with a new one using the same (existing) name, destroy it, create the new one, and choose [Replace With New].

A tip: run yast -> Software -> Online Update before creating a SLES container; this way the OS will be created with the newest packages.

Connect to your newly created SLES container:
lxc-console --name lxc1 # Use this if you cannot connect via SSH.
ssh root@lxc1 # Start with some housekeeping, create an admin user, set up sudo and disable root from SSH. See later.

You can create more containers in the same way:

cd /usr/share/lxc/templates
# We have a backup named lxc-sles.orig
cp lxc-sles.orig lxc-sles
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc2/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc2'

yast -> Miscellaneous -> Lxc -> Create
Name: lxc2, Template: sles
IP address: 192.168.2.4, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234

cd /usr/share/lxc/templates
# We have a backup named lxc-sles.orig
cp lxc-sles.orig lxc-sles
cat lxc-sles | grep '$path'
sed -i 's/\$path/\/lxc\/lxc3/g' lxc-sles
cat lxc-sles | grep '/lxc/lxc3'

yast -> Miscellaneous -> Lxc -> Create
Name: lxc3, Template: sles
IP address: 192.168.2.5, Subnet: /24, Bridge: br0
Root Password: 1234, Repeat Password: 1234


IV.) The containers exist, first steps.

After I logged in to my lxc1 (as root ...for now):
lxc1:~ # df -m /
Filesystem              1M-blocks  Used Available Use% Mounted on
/dev/mapper/vmvg-lxc1lv      5040   926      3859  20% /

/dev/mapper/vmvg-lxc1lv is /lxc/lxc1 on the host system, and the root (/) FS inside the container.
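A quick illustration of the direct access mentioned earlier; for example, a host-side backup of the guest's /etc (run on vm):

vm:~ # ls /lxc/lxc1/etc/                                # the guest's /etc, seen from the host
vm:~ # tar czf /root/lxc1-etc.tar.gz -C /lxc/lxc1 etc   # back it up without touching the container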

As your host SLES is already registered and the container's SLES is a clone, there is no need to register this one.
But the original installation medium should be removed as a software source.
yast -> Software -> Software Repositories, select:
99 (Default)│   x   │           │SUSE-Linux-Enterprise-Server-11-SP3 11.3.3-1.138│cd:///?devices=/dev/sr0
then [ ] Enabled # Disable it.

Go to YaST's "Software Management"; in "Search Phrase" type sudo and press enter. It offers 2 packages:
sudo      │Execute some commands as root│1.7.6p2     │           │     1.2 MiB│
yast2-sudo│YaST2 - sudo configuration   │2.17.3      │           │   164.0 KiB│

Select them with the space key and press "Accept", then accept the dependencies as well.

If you will be dealing with apps that have an X11 frontend, I suggest also installing the xterm package; it's perfect for testing whether X forwarding works.

Go to yast -> Security and Users -> User and Group Management -> Add
Add a user here, I chose lxc1op (and lxc2op, lxc3op for the other two).

There is a yast module for sudo, but use visudo this time.
Find this line: Defaults env_keep = "LANG LC_ADDRESS ...
And add DISPLAY to the start of the quoted list:
Defaults env_keep = "DISPLAY LANG LC_ADDRESS
This is needed if you want to run apps with an X frontend through sudo (or after a sudo su -).

Find these lines:
Defaults targetpw   # ask for the password of the target user i.e. root
ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!
And disable them, see:
# Defaults targetpw   # ask for the password of the target user i.e. root
# ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!

As this is a test container and I have only one user here, I don't need a complicated sudo config, but you can work with groups etc. in case of need. Later I'll show you how.
Find these lines:
# User privilege specification
root ALL=(ALL) ALL

And add your admin user below root:
# User privilege specification
root ALL=(ALL) ALL
lxc1op ALL=(ALL) ALL

Save and quit visudo (as in the vi editor: :wq).

Disable root from SSH:
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
sed -i 's/#PermitRootLogin\ yes/PermitRootLogin\ no/g' /etc/ssh/sshd_config
service sshd restart
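A quick check that the change took effect:

grep PermitRootLogin /etc/ssh/sshd_config # should print: PermitRootLogin no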



V.) Generate certificates for LDAP.

yast -> Security and Users -> CA Management

[Launch CA Management Module]

[ Create Root CA ]
CA name: vmcacert # The name comes from "vm CA certificate".
Common name: vm.lab (in most cases this is the FQDN of the server, for example: vm12.company.com)
E-Mail Addresses: ipl873@gmail.com
Organization: lab # In most cases this is the company name
Organizational Unit: sysops
Locality:
State:
Country: Hungary # Select your country.
[Next]
Type a pwd twice.
Key Length (bit): 2048 # This is the default; it's strong enough for test purposes. If you build real servers, choose 4096.
Valid Period (days): 3650
Advanced -> nsComment -> Delete the string: "YaST Generated CA Certificate"
Next, Create.

Now export it for future use.
Select vmcacert, press [ Enter CA ], then [Advanced...↓] -> Export to File -> (x) Certificate and the Key Unencrypted in PEM Format -> File Name: /root/vmcacert.pem, 3x OK

Select vmcacert, and press [ Enter CA ], go to Certificates tab, [Add↓], Add Server Certificate.
Common name: vm # In most cases this is the FQDN of the server, for example: vm12.company.com. My host's name is simply vm.
E-Mail Addresses: ipl873@gmail.com
Organization: lab # In most cases this is the company name
Organizational Unit: sysops
Locality:
State:
Country: Hungary # Select your country
[x] Use CA Password as Certificate Password
Key Length (bit): 2048
Valid Period (days): 3649
Advanced -> nsComment -> Delete the string: "YaST Generated Server Certificate"
Next, Create.

The cert has been created.

│Status│Common Name│E-Mail Address  │Organization│Organizational Unit│Locality│State│Country
│Valid │vm         │ipl873@gmail.com│lab         │sysops             │        │     │HU

Select it, then [Export↓] -> Export to File -> (x) Certificate and the Key Unencrypted in PEM Format
Certificate Password: Add vmcacert's pwd here. -> File Name: /root/vm.pem, 3x OK, Finish, Quit.

cd /root

cp vm.pem vm_pub.pem
cp vm.pem vm_priv.pem
rm vm.pem
vi vm_pub.pem
# Keep the first part of the content starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----
vi vm_priv.pem
# Keep the second part of the content starting with -----BEGIN RSA PRIVATE KEY----- and ending with -----END RSA PRIVATE KEY-----

cp vmcacert.pem vmcacert_pub.pem
cp vmcacert.pem vmcacert_priv.pem
rm vmcacert.pem
vi vmcacert_pub.pem
# Keep the first part of the content starting with -----BEGIN CERTIFICATE----- and ending with -----END CERTIFICATE-----
vi vmcacert_priv.pem
# Keep the second part of the content starting with -----BEGIN RSA PRIVATE KEY----- and ending with -----END RSA PRIVATE KEY-----
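If you'd rather not hand-edit the PEM bundles in vi, openssl can do the splitting; a sketch replacing the cp/vi/rm steps above (it assumes the keys were exported unencrypted, as we did):

openssl x509 -in vm.pem -out vm_pub.pem               # keeps only the certificate block
openssl rsa -in vm.pem -out vm_priv.pem               # keeps only the RSA private key block
openssl x509 -in vmcacert.pem -out vmcacert_pub.pem
openssl rsa -in vmcacert.pem -out vmcacert_priv.pem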

As I'm using this cert and its CA for LDAP, I put them into /etc/ssl/certs/openldap. You can choose another destination, or you can create more CAs and certs.
But don't forget to set proper file permissions to protect the private keys.
mkdir /etc/ssl/certs/openldap
mv vm*.pem /etc/ssl/certs/openldap
chown -R ldap:ldap /etc/ssl/certs/openldap
chmod 500 /etc/ssl/certs/openldap
chmod 400 /etc/ssl/certs/openldap/*
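Before wiring these files into OpenLDAP, it's worth verifying them as root; the first command should answer with 'OK':

openssl verify -CAfile /etc/ssl/certs/openldap/vmcacert_pub.pem /etc/ssl/certs/openldap/vm_pub.pem
openssl x509 -in /etc/ssl/certs/openldap/vm_pub.pem -noout -subject -dates   # check the CN and the validity period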


VI.) Set up OpenLDAP.

On vm: yast -> Network Services -> LDAP Server
Allow it to install the openldap2 package.

Start LDAP Server (x) Yes
Firewall Settings [x] Open Port in Firewall
Server type (x) Stand-alone server

Basic Settings
  [x] Enable TLS
  [x] Enable LDAP over SSL (ldaps) interface

CA Certificate File (PEM Format): /etc/ssl/certs/openldap/vmcacert_pub.pem
Certificate File (PEM Format): /etc/ssl/certs/openldap/vm_pub.pem
Certificate Key File (PEM Format - Unencrypted): /etc/ssl/certs/openldap/vm_priv.pem

Next takes you to Basic Database Settings
Database Type: hdb
Base DN: dc=vmldap
Administrator DN: cn=Administrator [x] Append Base DN
LDAP Administrator Password: add a strong pwd here...
Validate Password: ...and again.
Database Directory: /var/lib/ldap
[x] Use this database as the default for OpenLDAP clients
Next.

Note: save this info, you'll need it later.
Start LDAP Server: Yes
Register at SLP Service: No
Database Suffix: dc=vmldap
Administrator DN: cn=Administrator,dc=vmldap
Finish

Check OpenLDAP:
ps -ef | grep ldap
cat /var/log/messages | grep slapd
telnet localhost 389
telnet localhost 636
telnet vm 389
telnet vm 636
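A more meaningful check than telnet is an actual LDAP query; ldapsearch comes with the openldap2-client package (the ldaps variant assumes TLS_CACERT in /etc/openldap/ldap.conf points at vmcacert_pub.pem):

ldapsearch -x -H ldap://vm -b '' -s base namingContexts                        # anonymous rootDSE query, should show dc=vmldap
ldapsearch -x -H ldaps://vm -D 'cn=Administrator,dc=vmldap' -W -b 'dc=vmldap'  # authenticated search over SSL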

Check OpenLDAP with LDAP browser.
yast -> Network Services -> LDAP Browser
[Add] a new client config here,
Enter the name of the new LDAP connection: vmldap
LDAP Server: vm
Administrator DN: cn=Administrator,dc=vmldap
LDAP Server Password: Type your LDAP srv pwd here
[ ] LDAP TLS # You can set up TLS connection if you wish, of course. I wanted to try LDAP first without TLS.

Now you're able to browse the LDAP database; it's almost empty ...yet.

Set up vm for LDAP auth.

yast -> Network Services -> LDAP Client
Accept the 4 packages that need to be installed.

User Authentication (x) Use LDAP
Addresses of LDAP Servers: 127.0.0.1
LDAP Base DN: dc=vmldap
Secure Connection [ ] LDAP TLS/SSL # You can set up TLS connection if you wish, of course. I wanted to try LDAP first without TLS.
[x] Start Automounter
[x] Create Home Directory on Login

[Advanced Configuration...] -> Administration Settings
Configuration Base DN: ou=ldapconfig,dc=vmldap
Administrator DN: cn=Administrator,dc=vmldap
[x] Create Default Configuration Objects
[x] Home Directories on This Machine
[Configure User Management Settings...], 2x OK


VII.) Set up an LDAP user.

yast -> Security and Users -> User and Group Management

Select Groups, [Set Filter↓], LDAP Groups
[Add], Groupname: lxcops, don't modify anything, press OK.

Select Users, [Set Filter↓] LDAP Users
[Add], First Name: Johnny, Last Name: Bravo, Username: jbravo, give the pwd twice for this user, and select "Details".
Home Directory Permission Mode: 750
At "LDAP Groups", [x] lxcops # Johnny will be an operator of the SLES containers.
2x OK.

cat /etc/passwd | grep jbravo # jbravo does not exist as a standard un*x user.
id jbravo # But it exists as an LDAP user.
uid=1003(jbravo) gid=100(users) groups=1000(lxcops),100(users)
ls -ld /home/jbravo # jbravo's home exists.
drwxr-x--- 5 jbravo users 1024 Dec 26 17:31 /home/jbravo
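You can also check the user through NSS, which is exactly the path login will use:

getent passwd jbravo # resolved via LDAP, not /etc/passwd
getent group lxcops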

Check this user in YaST's LDAP Browser: at "LDAP Connections" press the down key, select "vmldap" and press enter.
Provide the LDAP admin pwd and press OK, then open "dc=vmldap"; you'll see this:
│─┬─dc=vmldap
│ ├──ou=group
│ ├──ou=ldapconfig
│ ├┬─ou=people
│ │└──uid=jbravo



Now it's clear that LDAP authentication works as expected. It's not strictly necessary to set up the LDAP client on your host machine (vm in my case);
you can manage LDAP users/groups from any OS if you set up "LDAP Client" and "Administration Settings" (see above how).
I chose another scenario: I manage my users only from vm. The clients (3 LXCs) can authenticate against vm (through TLS), but that's all;
no LDAP management on my LXCs.


VIII.) Set up LDAP authentication on lxc1.

Share the CA and the server cert you've already created on vm.

cd /etc/ssl/certs/openldap/ # Copy the certs from vm to lxc1.
scp *pub.pem lxc1op@lxc1:/tmp

Log in to lxc1 as lxc1op and switch to root (sudo su -), then edit /etc/hosts: # Or set up your nameserver to resolve these names; I don't need that for this small environment.
127.0.0.1     localhost
192.168.2.2   vm
192.168.2.3   lxc1
192.168.2.4   lxc2
192.168.2.5   lxc3

cd /tmp
mkdir /etc/ssl/certs/openldap
mv *pub.pem /etc/ssl/certs/openldap
chmod 755 /etc/ssl/certs/openldap
chmod 644 /etc/ssl/certs/openldap/*

yast -> Network Services -> LDAP Client
Accept the packages that need to be installed.

User Authentication (x) Use LDAP
Addresses of LDAP Servers: vm
LDAP Base DN: dc=vmldap
Secure Connection [x] LDAP TLS/SSL
[x] Start Automounter
[x] Create Home Directory on Login

Press [Advanced Configuration...]
Certificate Directory: /etc/ssl/certs/openldap
CA Certificate File: /etc/ssl/certs/openldap/vmcacert_pub.pem
Go to "Administration Settings" here if you want to manage LDAP users/groups from lxc1; I don't, as I'm managing LDAP from vm.

That's all: lxc1 has been successfully set up to authenticate users against vm.
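To confirm that TLS really works end to end from the container, a quick test (assuming the openldap2-client package is installed and TLS_CACERT in /etc/openldap/ldap.conf points at the copied vmcacert_pub.pem; -ZZ forces StartTLS and fails if the certificate can't be verified):

ldapsearch -x -H ldap://vm -ZZ -b 'dc=vmldap' '(uid=jbravo)' uid
id jbravo # should resolve on lxc1 as well, via LDAP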

Enable the lxcops group to switch to root.
Start visudo, go to the end of the file, and append the two lines below:
# The members of lxcops can switch to root
%lxcops ALL=(ALL) ALL

Save and check it:
cat /etc/sudoers | tail -2

Now comes the final test. Log in to lxc1 as jbravo, and you will see this:

Creating directory '/home/jbravo'. # Whoops, another problem. Or not. How do we deal with home directories?

Switch to root by running: sudo su -
Provide jbravo's pwd, now you're root on lxc1.



I set up
[x] Create Home Directory on Login
because my test user and the other members of the lxcops LDAP group are the admins of my containers, and I don't want to store any data for my container admin(s).

What if you need a home that's accessible from anywhere? Well, it's a good idea and possible, of course.
Install NFS on vm and export /home, or create an independent file system with LVM under /export/home (for example).

Export this home via NFS ("no_root_squash" is mandatory!).
Mount this exported filesystem on vm and on your containers. For example: vm:/export/home on /mnt/vmhome
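A minimal sketch of the two entries involved (hypothetical paths, matching the example above):

# /etc/exports on vm -- no_root_squash is the important part:
/export/home   192.168.2.0/24(rw,no_root_squash,sync)

# /etc/fstab on vm and on each container:
vm:/export/home   /mnt/vmhome   nfs   defaults   0 0

Run exportfs -ra on vm after editing /etc/exports.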

On vm, in YaST's "User and Group Administration" -> "Users──Groups──Defaults", edit "Path Prefix for Home Directory" for "New Users": change it from /home to /mnt/vmhome.
Don't forget to disable [ ] Create Home Directory on Login.
From now on, if you create a user (existing ones can be modified), their home will be /mnt/vmhome/$USER, and this NFS-exported home will be available from every container.

I don't need this so I skipped this possibility, but nobody knows the future.

I got what I wanted: now I have a SLES host with 3 LXCs (also running SLES), with LDAP (TLS) authentication and with working sudo. Now I'm gonna install some MW apps ;-)

Monday, July 15, 2013

Hercules emulator: how to install CentOS and SLES s390 on Hercules.

Hercules is a mainframe computer emulator; it can run on several host operating systems, such as Linux, Windows, BSD, MacOS and so on. In this how-to I'll show you how to build a working Hercules environment on a Linux OS, and how to install CentOS and SuSE Linux Enterprise Server on Hercules. Please don't forget that all the software I'm using here is legal, free and open source.

I chose OpenSuSE 12.3 as host OS, this is my network topology:

+--------------Host OS OpenSuSE----
|127.0.0.1 (lo)
|192.168.11.19 (eth0)
|192.168.10.20 (tun0)
|
|..+-----------Hercules------------
|..|192.168.10.21
|..|
|..|..+--------CentOS--------------
|..|..|192.168.10.22
|..|..+----------------------------
|..|
|..+-------------------------------

This is a typical home topology: I have a router (it also works as a LAN switch and a Wi-Fi AP), a laptop, a desktop PC (I'm using this one to run Hercules), etc. Using this topology your emulated OSes will be able to reach your home network and the Internet, and the reverse is true as well.

Disable Network Manager:
Open a console, then start yast, select Network Devices -> Network Settings -> Global Options.
Network Setup Method: Traditional Method with ifup
(Here I disabled IPv6, because I don't use it.)
By navigating in this YaST module, set up your eth0 network card. I'm using these settings:
Overview (select the network card):
   IP Address: 192.168.11.19, Subnet Mask: /24 (/24 means 255.255.255.0)
(I'm using a router to reach Internet.)
Hostname/DNS
   Hostname: probe, Domain Name: home, Name Server1: 192.168.11.1
Routing
   Default IPv4 Gateway: 192.168.11.1

Check if tun device is available (it exists by default, if not, use Yast to add it):
probe:~ # ls -l /dev/net/tun
crw-rw-rw- 1 root root 10, 200 May 29 15:51 /dev/net/tun

Disable the SuSE firewall. I don't know exactly why, but IP forwarding wasn't working reliably with the firewall turned on. Besides, this is a Linux box behind a router; it doesn't store sensitive data, so I see no reason for a firewall.
yast -> Security and Users -> Firewall
  alt+d Disable Firewall Automatic Starting
  alt+v Save Settings and Restart Firewall Now
  Finish, Quit.

Firewall Configuration: Summary
  ┌─────────────────
  │Firewall Starting
  │
  │ *  Disable firewall automatic starting
  │ *  Firewall will not start after the configuration is written
  │
  │Internal Zone
  │
  │    Interfaces
  │
  │     +  RTL8101E/RTL8102E PCI Express Fast Ethernet controller / eth0
  │    Open Services, Ports, and Protocols
  │
  │     +  Internal zone is unprotected. All ports are open.
  │
  │Demilitarized Zone
  │
  │ *  No interfaces assigned to this zone.
  │
  │External Zone
  │
  │ *  No interfaces assigned to this zone.

Reboot your box:
probe:~ # shutdown -r now

>>>>>> CentOS part <<<<<<

Download CentOS 4.7 for s390; 4.7 is the latest version that supports S/390.
Link: vault.centos.org/4.9/isos/s390/centos-4.7-s390-bindvd.torrent
After you've downloaded the ISO via torrent, please keep seeding it; it's important.

As CentOS 4.7 is no longer available via HTTP/FTP, and we downloaded the installation DVD, let's share it via NFS:
probe:~ # zypper install yast2-nfs-server nfs-kernel-server
Then start yast -> Network Services -> NFS Server
   NFS Server: Start
   Firewall Settings: [x] Open Port in Firewall (for future usage, as I have no firewall at the moment)
   Enter NFSv4 domain name: home
Next -> Directories to Export -> Add Directory -> /export. Ok, Ok, Finish, Quit.
probe:~ # mkdir -p /export/centos47dvd
probe:~ # mount -o loop /ISO/cos47.iso /export/centos47dvd
(I renamed the original ISO file.)
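A quick sanity check that the share is visible (both tools come with the NFS packages):

probe:~ # exportfs -v              # lists the active exports
probe:~ # showmount -e localhost   # shows what clients will see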

Install Hercules:
probe:~ # zypper install hercules

Create your user, who will run Hercules (I chose zop; it comes from "a Z arch. operator"):
probe:~ # useradd -m zop
probe:~ # passwd zop

Create sudo access for zop. Hercules has to run as root to create TUN interface(s) at startup. Of course I don't want to run Hercules directly as root, therefore I give limited root access to zop; zop will be able:
- to run Hercules as root,
- to run the script which sets up IP forwarding, also as root.

probe:~ # visudo
Find these lines:
Defaults targetpw   # ask for the password of the target user i.e. root
ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!

And disable them, use hashmarks for this:
# Defaults targetpw   # ask for the password of the target user i.e. root
# ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!

Find this line:
root ALL=(ALL) ALL

Then add your primary admin user (adm in my case) below root (then you can forget your root password). Also add user zop:
adm ALL=(ALL) ALL
zop ALL=(ALL) NOPASSWD: /usr/bin/hercules, /usr/local/bin/zlinux1_net_setup

Create the zlinux1_net_setup script. I edited /etc/sysctl.conf to enable IP forwarding, but with no success: after a reboot /proc/sys/net/ipv4/ip_forward contained 0, so I created a small script to enable IP forwarding manually.
probe:~ # touch /usr/local/bin/zlinux1_net_setup
probe:~ # chmod 700 /usr/local/bin/zlinux1_net_setup
probe:~ # vi /usr/local/bin/zlinux1_net_setup
(After vi has started, press i to enter insert mode.)
#!/bin/bash
#This small script enables IP forwarding.
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "IP forwarding has been set up."

Once you've finished editing, press the ESC key, then :wq<enter> to save and exit.
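You can test the script right away (still as root); the last command should print 1:

probe:~ # /usr/local/bin/zlinux1_net_setup
IP forwarding has been set up.
probe:~ # cat /proc/sys/net/ipv4/ip_forward
1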

Leave root and log in as zop. Then create your DASD; its type will be 3390-9. This is a well-known and widely used DASD device type, and it takes ~8GB on your storage.
zop@probe:~> mkdir h.centos47 && cd h.centos47
zop@probe:~/h.centos47> dasdinit -linux -lfs CENTOS.3390-9 3390-9 ROOT
HHCDU044I Creating 3390 volume ROOT: 10017 cyls, 15 trks/cyl, 56832 bytes/track
HHCDU041I 10017 cylinders successfully written to file CENTOS.3390-9
HHCDI001I DASD initialization successfully completed.
zop@probe:~/h.centos47> ls -l CENTOS.3390-9
-rw-r----- 1 zop users 8539292672 May 29 16:25 CENTOS.3390-9

Create your Hercules config file:
zop@probe:~/h.centos47> touch centos47.cnf
zop@probe:~/h.centos47> vi centos47.cnf

# CPU Configuration, 1 CPU, 1 GB of RAM
 CPUSERIAL 002623              # CPU serial number
 CPUMODEL  2064                # CPU model number
 MODEL     EMULATOR            # STSI returned model
 PLANT     ZZ                  # STSI returned plant
 MANUFACTURER HRC              # STSI returned manufacturer
 LPARNAME  HERCULES            # DIAG 204 returned lparname
 CPUVERID  FD                  # CPU Version Identification
 MAINSIZE  1024                # Main storage size in megabytes
 XPNDSIZE  0                   # Expanded storage size in megabytes
 NUMCPU    1                   # Number of CPUs
 ARCHMODE  ESAME               # Architecture mode S/370, ESA/390 or z/Arch
 ALRF      DISABLE             # ASN-and-LX-Reuse facility
 ECPSVM    NO                  # VM Assist : NO or Level (20 recommended)
#
# OS Tailoring
#
 LOADPARM  0200....            # IPL parameter
 OSTAILOR  LINUX               # OS tailoring
 SYSEPOCH  1900                # Base year for initial TOD clock
#
# Devices
  0009    3215-C  / noprompt
  001F    3270
# Our disk
  0200    3390    /home/zop/h.centos47/CENTOS.3390-9
# The Network
  0E20,0E21  3088    CTCI /dev/net/tun 1500 192.168.10.21 192.168.10.20 255.255.255.0

Save it, then exit from vi.

Start Hercules with the newly created config:
zop@probe:~/h.centos47> sudo hercules -f /home/zop/h.centos47/centos47.cnf

Hercules has just started, check for the below lines:

(Root DASD)
HHCDA020I /home/zop/h.centos47/CENTOS.3390-9 cyls=10017 heads=15 tracks=150255 trklen=56832
(TUN Network interface:)
HHCCT073I 0E20: TUN device tun0 opened
(You can ignore the next one:)
HHCIF005E hercifc: ioctl error doing SIOCDIFADDR on tun0: 25 Inappropriate ioctl for device
(Ready:)
HHCAO001I Hercules Automatic Operator thread started                                     
          tid=7F46AA5CC700, pri=0, pid=6065

On your host machine, check the network interfaces and the routing table; the 'ifconfig -a' command shows your network interfaces:

eth0      Link encap:Ethernet  HWaddr 00:24:21:F6:54:C2 
          inet addr:192.168.11.19  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10335 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6605 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:14011146 (13.3 Mb)  TX bytes:753826 (736.1 Kb)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:33 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4181 (4.0 Kb)  TX bytes:4181 (4.0 Kb)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 
          inet addr:192.168.10.20  P-t-P:192.168.10.21  Mask:255.255.255.0
          UP POINTOPOINT RUNNING  MTU:1500  Metric:1
          RX packets:600 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:50400 (49.2 Kb)  TX bytes:1344 (1.3 Kb)

We got a new TUN interface created by Hercules, so we don't need to create it manually.

Check the host OS's routing table by running the 'route' command (as root):
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.11.1    0.0.0.0         UG    0      0        0 eth0
loopback        *               255.0.0.0       U     0      0        0 lo
link-local      *               255.255.0.0     U     0      0        0 eth0
192.168.10.0    *               255.255.255.0   U     0      0        0 tun0
192.168.11.0    *               255.255.255.0   U     0      0        0 eth0

Let's continue and bring up the CentOS installer:
Command ==> ipl /export/centos47dvd/generic.ins

Hercules provides a virtual serial line interface to the operating system you're running on it. You can communicate with Hercules and the running OS in the same window (for Hercules commands, type help and press enter). If you want to say something to Hercules (for example: help), just type it in. If you want to say something to the running OS, start the line with a single . (dot) character.

Set up your OS on a virtual S/390 box, step by step:

Which kind of network device do you intend to use
(e.g. ctc, iucv, qeth, lcs).
Command ==> .ctc

Enter the bus ID and the device number of your CCW devices.
Command ==> .0.0.0E20,0.0.0E21

Enter the FQDN of your new Linux guest (e.g. s390.redhat.com):
Command ==> .zlinux1

Enter a valid IP address of your new Linux guest:
Command ==> .192.168.10.22

Enter a valid network address of the new Linux guest:
Command ==> .255.255.255.0

Enter the IP of your CTC / ESCON / IUCV point-to-point partner:
Command ==> .192.168.10.21

Select which protocol should be used for the CTC interface:
Command ==> .0

Waiting... waiting... after 10+ seconds you will see the adapter is up:
CTC driver Version: 1.63 initialized
divert: not allocating divert_blk for non-ethernet device ctc               
ctc0: read: ch-0.0.0e20, write: ch-0.0.0e21, proto: 0                             
ctc0: connected with remote side

Enter your DNS server(s), separated by colons (:):
Command ==> .192.168.11.1

Enter your DNS search domain(s) (if any), separated by colons (:):
Command ==> .

It informs you about the network:
                                                                                                                                                                     
ctc0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:192.168.10.22  P-t-P:192.168.10.21  Mask:255.255.255.255                                                                                         
          UP POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1                                                                                                           
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0                                                                                                         
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0                                                                                                       
          collisions:0 txqueuelen:100                                                                                                                                
          RX bytes:0 (0.0 B)  TX bytes:88 (88.0 B)                                                                                                                   
                                                                                                                                                                     
lo        Link encap:Local Loopback                                                                                                                                  
          inet addr:127.0.0.1  Mask:255.0.0.0                                                                                                                        
          UP LOOPBACK RUNNING  MTU:16436  Metric:1                                                                                                                   
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0                                                                                                         
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0                                                                                                       
          collisions:0 txqueuelen:0                                                                                                                                  
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                                                                                                                     
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.21   0.0.0.0         255.255.255.255 UH    0      0        0 ctc0
127.0.0.1       0.0.0.0         255.255.255.255 UH    0      0        0 lo
0.0.0.0         192.168.10.21   0.0.0.0         UG    0      0        0 ctc0

Enter DASD range (e.g. 200-203 or 200,201,202,203)                                                                                 
Command ==> .

Wait for these messages:
Starting telnetd and sshd to allow login over the network.                                                                 
Connect now to 192.168.10.22 to start the installation.

192.168.10.22 is reachable, so we're going to install CentOS. But first we have to run the script we created above, so that we can reach our emulated OS from the outside world (outside meaning: outside of the host OS).

There is one more step to do. Our host OS (where Hercules is running) knows how to reach 192.168.10.22, but the other hosts on the network don't. At this point we have to tell everyone in our network how to reach zlinux1.
- If the host OS is a network gateway which is used as the default gateway for your network, then you don't have to do anything; 192.168.10.22 will be reachable from your entire network.
- If the host OS is not the (default) gateway for your network (this is the situation here; I use a cheap router at home), then you have to add a new route rule to your gateway/router (see also the iproute2 equivalent below).
  Linux: route add -host 192.168.10.22 gw 192.168.11.19, then make it permanent.
  Router: find the proper menu on your router's admin page; in my case it's "Advanced IP routing". Then add 192.168.10.22/32 (or 192.168.10.22/255.255.255.255), GW: 192.168.11.19.
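On a Linux gateway, the iproute2 equivalent of the route command above is:

ip route add 192.168.10.22/32 via 192.168.11.19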

Once you've set up your gateway/router, zlinux1 (192.168.10.22) will be reachable from the entire network. Later you can also set up port forwarding on your gateway/router if you want to reach zlinux1 from the Internet.

Let's continue with the CentOS installation. Now it's time to telnet or ssh to your CentOS. I'm using telnet from the host machine, but remote access should work from other hosts in your network as well.
adm@probe:~> telnet 192.168.10.22
Trying 192.168.10.22...
Connected to 192.168.10.22.
Escape character is '^]'.
Welcome to the Red Hat Linux install environment 1.1 for S/390
login: root
Welcome to the Red Hat Linux install environment 1.1 for S/390

Use root as username, no password required.

Language: English
Installation Method: NFS
NFS server name: 192.168.10.20, CentOS 4.7 directory: /export/centos47dvd
-- Running anaconda, the CentOS 4.7 system installer - please wait...
-- DISPLAY variable not set. Starting text mode!
Unable to Start X: Use text mode
Welcome to CentOS 4.7!: OK
Installation Type: Server
FCP Devices: OK
Disk Partitioning Setup: Disk Druid
Partitioning:
  dasda1               2  80136     7043  physical v <- Delete it! It gives you:
  Free space           2  80136     7043  Free space <- Now we have free space.

Move the cursor to the line which represents the free space, then:
New -> Mount point: / -> File System type: ext3 -> Allowable drives: [*] dasda -> Size (MB):  6144, (*) Fixed Size -> OK
Use the remaining space to create a swap partition.

Network Configuration for ctc0:
  [*] Activate on boot
  IP Address: 192.168.10.22
  Netmask: 255.255.255.255
  Point to Point (IP): 192.168.10.21
Miscellaneous Network Settings
  Gateway: 192.168.10.21
  Primary DNS: 192.168.11.1
Hostname Configuration
  (*) manually: zlinux1
Firewall: (*) No firewall
Warning - No Firewall -> Proceed
Security Enhanced Linux -> (*) Disabled
Language Support: English (USA)
Time Zone Selection: Europe/Budapest (Choose your country's capital, or the proper timezone if your country has more than one timezone.)

Root Password: Type here a strong password, then repeat it.

I installed these package groups:
Package Group Selection
  │ [*] Editors                     ▒ │
  │ [*] Text-based Internet         ▒ │
  │ [*] Server Configuration Tools  ▒ │
  │ [*] Web Server                  ▒ │
  │ [*] Mail Server                 ▒ │
  │ [*] FTP Server                  ▒ │
  │ [*] Administration Tools        ▒ │
  │ [*] System Tools                ▒ │
  OK
Installation to begin: OK

Now you can see the packages being installed by Anaconda (the Red Hat installer). Once it's done, press the Reboot button. It won't do a real reboot. On the Hercules main screen wait for these lines:
HHCCP042I SYSCONS interface inactive                                                              
Power down.                                                                                       
CPU0000: SIGP Stop and store status (09) CPU0000, PARM 00000000: CC 0                             
HHCCP010I CPU0000 store status completed.

Now press the ESC key to see Hercules devices. Press ESC again to go back to the command line.





Then leave Hercules (type exit, now without a . character, as this command is for Hercules):
Command ==> exit

Now close everything and reboot your host OS, to check that everything comes up fine after a host boot.

Log in to your "server" as zop from a remote machine, then start Hercules and CentOS, and keep them running in the background, without any open terminal. Now I'm using my Debian box as the client, not the host OS SuSE, but the user is the same; I'm using adm at home.

My laptop uses 192.168.11.51 IP address.
adm@cw:~$ ssh zop@192.168.11.19
Password:

zop@probe:~> screen -S h.centos47
#Here you got a new terminal. Check it:
zop@probe:~> screen -ls
There is a screen on:
        6367.h.centos47 (Attached)
1 Socket in /var/run/uscreens/S-zop.
#It's fine. Let's start Hercules:
zop@probe:~> cd h.centos47
zop@probe:~/h.centos47> sudo hercules -f centos47.cnf

Start the installed CentOS:
Command ==> ipl 0200
Wait 15 seconds at the chooser, then you will see:
We are running native (31 bit mode)

Our CentOS has started booting. It takes a few minutes, as Hercules emulates S/390 on x86; wait for these lines:
CentOS release 4.7 (Final)                                                     
Kernel 2.6.9-78.EL on an s390                                                  
                                                                               
zlinux1 login:                                                                 
Command ==>


Once it's running, get a new terminal window by pressing CTRL+a then the c key. Set up and test host networking:
zop@probe:~> sudo /usr/local/bin/zlinux1_net_setup
IP forwarding has been set up.
zop@probe:~> ping -c 2 192.168.10.22
PING 192.168.10.22 (192.168.10.22) 56(84) bytes of data.
64 bytes from 192.168.10.22: icmp_seq=2 ttl=64 time=0.414 ms
64 bytes from 192.168.10.22: icmp_seq=3 ttl=64 time=0.292 ms

Close this session by typing exit (or pressing CTRL+d); it brings back screen's window 0 with Hercules (if not, press CTRL+a then 0). Detach this screen session by pressing CTRL+a then d, and finally check that it's still running in the background:
[detached from 6367.h.centos47]
zop@probe:~> screen -ls
There is a screen on:
    6367.h.centos47    (Detached)
1 Socket in /var/run/uscreens/S-zop.

Now you can close this SSH connection; you can get your Hercules back by running 'screen -R <screenID>'. With the screen utility you can keep one or more active terminals (with running programs) in the background. I use this solution because Hercules needs an open terminal to run, so I run it in a screen session; then I can drop the connection, and Hercules keeps running in the background without a real display, keyboard, etc.

adm@cw:~$ ssh root@192.168.10.22
root@192.168.10.22's password:
Last login: Mon Jun  3 22:43:23 2013 from 192.168.11.51
[root@zlinux1 ~]# ping -c 4 bix.hu
PING bix.hu (193.239.149.1) 56(84) bytes of data.
64 bytes from www.bix.hu (193.239.149.1): icmp_seq=0 ttl=54 time=6.15 ms
64 bytes from www.bix.hu (193.239.149.1): icmp_seq=1 ttl=54 time=3.92 ms
64 bytes from www.bix.hu (193.239.149.1): icmp_seq=2 ttl=54 time=6.98 ms
64 bytes from www.bix.hu (193.239.149.1): icmp_seq=3 ttl=54 time=3.90 ms

--- bix.hu ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3042ms
rtt min/avg/max/mdev = 3.909/5.241/6.980/1.357 ms, pipe 2

The same ping test works from the Hercules console as well.
CentOS 4.7 s390 is ready for use, enjoy it! ;-)

>>>>>> SLES part <<<<<<

Set up your Hercules environment. I want to keep my CentOS, so I created a different directory (h.sles11) and a different DASD disk (SLES.3390-9). Don't forget to update the config file (sles11.cnf) with the new disk.
Mount the SLES 11 DVD to /export/sles11dvd, then run 'ipl /export/sles11dvd/suse.ins' and follow the steps below to install SuSE Linux Enterprise Server for S/390:
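A sketch of the preparation, mirroring the CentOS steps above (the only line that must change in the copied config file is the 0200 disk device):

zop@probe:~> mkdir h.sles11 && cd h.sles11
zop@probe:~/h.sles11> dasdinit -linux -lfs SLES.3390-9 3390-9 ROOT
zop@probe:~/h.sles11> cp ../h.centos47/centos47.cnf sles11.cnf
# In sles11.cnf change the disk line to:
#   0200    3390    /home/zop/h.sles11/SLES.3390-9
zop@probe:~/h.sles11> sudo hercules -f sles11.cnf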

>>> Linuxrc v3.3.81 (Kernel 3.0.13-0.27-default) <<<

                                                                                                                 
Main Menu

1) Start Installation
2) Settings
3) Expert
4) Exit or Reboot
Command ==> .1

Start Installation                                                                                                                                                   
1) Start Installation or Update                                                                                                             
2) Boot Installed System                                                                                                                           
3) Start Rescue System                                                                                                                               
Command ==> .1

Choose the source medium.
                                                                                                                                            
1) DVD / CD-ROM
2) Network
Command ==> .2

Choose the network protocol.
1) FTP   
2) HTTP  
3) NFS   
4) SMB / CIFS (Windows Share)                                                                                                                 
5) TFTP
Command ==> .3

Device address for read channel

0.0.0e20!>
Command ==> .

Device address for write channel

0.0.0e21!> 
Command ==> .

Select protocol for this CTC device
1) Compatibility mode (default)                                                                                                                
2) Extended mode
3) Compatibility mode for OS/390 and z/OS peers                                                                            
Command ==> .1

Automatic configuration via DHCP?

1) Yes   
2) No
Command ==> .2

Enter your IPv4 address.                                                                                                                             
Example: 192.168.5.77/24

Command ==> .192.168.10.22/32

Enter the IP address of the PLIP partner
192.168.10.22!>                                                                                                                                         

Command ==> .192.168.10.21

Enter the IP address of your name server. Leave empty or enter "+++" if you don't need one                             
net.65346f: ctc0: Connected with remote side
Command ==> .192.168.11.1

Enter the IP address of the NFS server                                                                                            
Command ==> .192.168.10.20

Enter the directory on the server                                                                                                 
 /!>                                                                                                                              
Command ==> ./export/sles11dvd

squashfs: version 4.0 (2009/01/31) Phillip Lougher                                                                                
Loading Installation System (2/6) (45120 kB) -   0%                                                                               
  1%                                                                                                                              
  2%                                                                                                                              

1) X11                                                                                                                            
2) VNC                                                                                                                            
3) SSH                                                                                                                            
4) ASCII Console                                                                                                                  
Command ==> .3

Enter your temporary SSH password                                                                                                 
Command ==> .root111

Wait for this message:
      ***  sshd has been started  ***

Open a terminal (I'm using my Debian box for this purpose), then ssh to 192.168.10.22:
ssh root@192.168.10.22 (you will be asked for your temporary SSH password)

SUSE Linux Enterprise Server 11 Installation

- there are shells running on consoles 2, 5, 6, 9
- use 'extend' to load extensions (remove with 'extend -r'); extensions are: o bind, gdb, sax2
- network setup: run, e.g. 'dhcpcd eth0'
- sshd: run 'rcsshd start' (don't forget to set a password with 'passwd')

Welcome to the inst-sys on 192.168.10.22 3.0.13-0.27-default s390x

run yast to start the installation

inst-sys:~ # yast
Type yast and press enter; the installer starts. :-) (It takes a while.)

Welcome screen:
  Language: English
  Keyboard Layout: Hungarian (in my case, select the key. layout which fits your needs)
  [x] I Agree to the License Terms.

Disk Activation                                                                                                                 
  [Configure DASD Disks ]
  Select: 0.0.0200 by pressing enter on it; check the column named Sel., and you will see a Yes there.
  │Sel.│ Channel│Device    │Type            │Access Type│Use DIAG│Formatted│Partition Information
  │Yes │0.0.0200│/dev/dasda│3990/C2, 3390/0C│RW         │No      │Yes      │
  The DASD has been selected, activate it:
  [Perform Action↓] -> Activate, it makes the DASD usable.
Press next, it takes you back to Disk Activation, press next again.

Installation Mode, (x) New Installation

Clock and Time Zone: (select the proper timezone here)
  Region: Europe
  Time Zone: Hungary
  [x] Hardware Clock Set to UTC

Installation Settings: (I kept the default configuration; it will install 2.4GB of software. Later you can remove unnecessary packages, such as the Gnome GUI. Or you can install SLES to a bigger DASD disk, or add more DASD disks to your existing environment.)
  Install -> Licences: I agree -> Install

At the end of the installation you will be asked to reboot the computer. This won't be a real reboot.
On Hercules main screen wait for these lines:
HHCCP042I SYSCONS interface inactive                                                              
Power down.                                                                                       
CPU0000: SIGP Stop and store status (09) CPU0000, PARM 00000000: CC 0                             
HHCCP010I CPU0000 store status completed.

Now (and in the future) type 'ipl 0200'; your SLES will be up in a couple of minutes. If you have already set up networking (see the CentOS installation process above), you can log in to SLES via SSH.

Well, now you have a working SuSE Linux Enterprise Server on a "mainframe" ;-)



>>>>>>Conclusion<<<<<<

Hercules is a great and powerful, free and legal emulator, which opens the door to the world of mainframe computers. In the meantime I changed my host OS to Debian Linux, dropped eth0 in favor of a bridge, and installed one more Linux on a Hercules VM, this one a Debian as well. Everything works like a charm.


Thanks for reading this. If you have questions or suggestions, do not hesitate to contact me at: ipl873@gmail.com