Monday, March 19, 2018

SUSE is the trusted source for your Cloud Foundry PaaS

With the recent release of the SUSE Cloud Application Platform, based on Cloud Foundry and Kubernetes, you might be wondering what benefits SUSE brings to these open source projects. Let me share some details about what it means to be a trusted source and enterprise-ready.

SUSE has been a major player in the open source industry for over 25 years, and our longstanding success is rooted deeply in this circle of trust. SUSE knows open source, and we know what it means to be enterprise ready. We have repeatedly and successfully turned open source technologies into the powerful enterprise-class software solutions you use today. Key to our success: an innovative, enterprise-grade software management stack; a powerful build model that enhances our ability to deliver enterprise-ready software; and enhanced security protection across the whole stack. But most importantly, our engineers are trained, certified, and ready to serve you.

With SUSE Cloud Application Platform, SUSE brings this model of trusted, enterprise-grade, open source software to your application delivery teams. You get a complete, open source solution with everything needed to accelerate application delivery, including SUSE Cloud Foundry, Stratos UI, SUSE CaaS Platform (our Kubernetes distribution), and SUSE Enterprise Storage. It is the first of its kind built and running on Kubernetes. If you missed what all that means, read the blog post about Applying the Cloud Foundry workflow to Kubernetes.

SUSE Linux Enterprise Server is at the core of the technologies we have created. Its package management has been enhanced for containers, and we have built an enterprise-grade container host OS called MicroOS which utilizes those enhancements. Updates for MicroOS are released on a continuous delivery model as transactional updates. These updates are atomic, don't affect the running system, and can be rolled back in the event you need to. The system can be rebooted manually to activate the changes applied by any updates, or set up to reboot automatically on a schedule through the rebootmgr tool. Everything delivered is signed and verified from SUSE sources. These features make MicroOS an ideal infrastructure for running Kubernetes, addressing key reliability, availability, serviceability (RAS) and security requirements for any enterprise environment. You can read further at the openSUSE Kubic project portal, the upstream project for SUSE CaaS Platform.

Many developers today are comfortable building containers on Linux variants that are not enterprise hardened, but those same containers will most likely be unacceptable in production environments. When you use SUSE Cloud Application Platform, you can be assured that your application is built on containers using SUSE Linux Enterprise base images, and you know that your container will make it out of dev/test and into production without any trouble. We utilize our powerful build model and the Open Build Service to build our base container images using KIWI. These container images are built, signed, and verified in the Open Build Service, and each image is then signed and readied for a public or private notary.

Moving up the SUSE Cloud Application Platform stack, you’ll see how we’ve carried our trademark enterprise-grade value further, into SUSE Cloud Foundry. Signed and readied SUSE images are used as our base for the SUSE Cloud Foundry Fissile Stem Cell that runs both the Cloud Foundry application stack and Build Packs. On top of all of that, these OCI compliant images can be used to implement your application, or a third party application, without the hassle of stripping the base image down and recreating it.

The SUSE build model, utilizing the Open Build Service and other open source software, gives us the advantage of a fully secured, tested, signed, and verified delivery of the entire SUSE Cloud Application Platform, from source to image to notary and into your hands. These sources go through hundreds of quality assurance runs daily in our openQA tool as part of our delivery pipeline.

And, while we’re on the absolutely critical topic of security, let’s recognize that there’s more to it than secure images. The whole SUSE Cloud Application Platform solution has been designed and delivered with security in mind. We fully support and integrate AppArmor on the container host (MicroOS). We also support UEFI Secure Boot, cryptographic hashing of all files, and a read-only root file system. Further hardening can be applied by following our hardening guide for SUSE Linux Enterprise Server.

Finally, and perhaps most importantly, we are here to help. We have trained and certified support engineers ready to jump in on a moment’s notice to dig you out of trouble. That’s part of our core mission here at SUSE. We have many different support offerings available, from dedicated to semi-dedicated premium engineers who can work directly with your teams.

SUSE is the trusted source for your Cloud Foundry PaaS. Our complete solution will give you everything you need to streamline lifecycle management of traditional and new cloud native applications. This platform facilitates DevOps process integration to accelerate innovation, improve IT responsiveness, and maximize return on investment.

Have a lot of fun!

Thursday, November 16, 2017

Dell Precision 5520; NVIDIA Optimus PRIME with openSUSE TW and Leap

NVIDIA Optimus is a technology that allows an Intel integrated GPU and discrete NVIDIA GPU to be built into and accessible through a laptop. Getting Optimus graphics to work on Linux requires a few somewhat complicated steps and there are several methods to choose from.
  • disabling one of the devices in BIOS, which may result in improved battery life if the NVIDIA device is disabled, but may not be available with all BIOSes and does not allow GPU switching 
  • using the official Optimus support (PRIME) included with the proprietary NVIDIA driver, which offers the best NVIDIA performance but does not allow GPU switching (the offload mode that would allow it doesn't work yet). 
  • using the PRIME functionality of the open-source nouveau driver, which allows GPU switching and powersaving but offers poor performance compared to the proprietary NVIDIA driver. 
  • using the third-party Bumblebee program to implement Optimus-like functionality, which offers GPU switching and powersaving but requires extra configuration. 
In this blog I'm going to focus on setting things up using the official Optimus support (PRIME output mode) included with the proprietary NVIDIA driver. You can read about it at the NVIDIA devtalk forums.

1) First we will need to disable the open-source nouveau driver. You can follow the link here, which walks you through the hard way of installing and setting up the NVIDIA driver.

2) Once the NVIDIA driver is installed, nouveau is blacklisted, and the nouveau module is no longer loading, it should look like this:
# lsmod | grep nvidia
nvidia_drm 53248 3
nvidia_modeset 843776 9 nvidia_drm
nvidia 13033472 1190 nvidia_modeset
drm_kms_helper 192512 2 i915,nvidia_drm
drm 417792 6 i915,nvidia_drm,drm_kms_helper
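That verification can also be scripted. Below is a small sketch of mine (the `driver_ok` helper name is my own); it reads `lsmod`-style output on stdin so it works against the live system or a saved sample:

```shell
# Succeeds when a line starting with "nvidia " is present (proprietary
# driver loaded) and no line starts with "nouveau" (module blacklisted).
driver_ok() {
  local mods
  mods=$(cat)
  printf '%s\n' "$mods" | grep -q '^nvidia ' &&
    ! printf '%s\n' "$mods" | grep -q '^nouveau'
}

# On a live system:
#   lsmod | driver_ok && echo "nvidia loaded, nouveau blacklisted"
```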

3) You can now begin setting up your xorg.conf file to use both the Intel integrated GPU (iGPU) and the dedicated NVIDIA GPU (dGPU) in output mode, which is explained in the NVIDIA devtalk forum link above.

Below is the /etc/X11/xorg.conf I use with my Dell Precision 5520 running openSUSE TW:
Section "Module"
    Load "modesetting"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "Intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
    Option "AccelMethod" "sna"
EndSection
The BusID for both cards can be discovered with this command: 
# lspci | grep -e VGA -e NVIDIA
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107GLM [Quadro M1200 Mobile] (rev a2)
The format for the BusID is explained here
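One thing worth knowing: lspci prints the slot as hexadecimal bus:device.function, while the xorg BusID wants decimal values. On this machine the numbers are small enough to look identical, but on systems with bus numbers above 9 they differ. A sketch of the conversion (the `lspci_to_busid` helper name is my own):

```shell
# Convert an lspci slot such as "01:00.0" (hex bus:device.function)
# to the decimal "PCI:bus:device:function" form xorg.conf expects.
lspci_to_busid() {
  local slot=$1
  local bus=${slot%%:*}     # "01"
  local rest=${slot#*:}     # "00.0"
  local dev=${rest%%.*}     # "00"
  local fn=${rest#*.}       # "0"
  printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$fn"
}

lspci_to_busid "00:02.0"   # → PCI:0:2:0 (the Intel iGPU above)
lspci_to_busid "01:00.0"   # → PCI:1:0:0 (the NVIDIA dGPU above)
```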

4) Once you have your xorg.conf file set up right, you will also need to set up your ~/.xinitrc file like this:
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

if [ -d /etc/X11/xinit/xinitrc.d ]; then
    for f in /etc/X11/xinit/xinitrc.d/*; do
        [ -x "$f" ] && . "$f"
    done
    unset f
fi

exec dbus-launch startkde
exit 0
Of course, I'm set up for KDE. If you want to load GNOME instead, change startkde to gnome-session in your ~/.xinitrc file.

5) Reboot, log in, and enjoy your new setup with discrete NVIDIA graphics via Optimus.

Have a lot of fun!

Note: NVIDIA PRIME on Linux currently does not work like it does on MS Windows, where 3D and performance graphics are offloaded to the NVIDIA GPU on demand. It works in an output method. The definitions of the two methods are:

"Output" allows you to use the discrete GPU as the sole source of rendering, just as it would be in a traditional desktop configuration. A screen-sized buffer is shared from the dGPU to the iGPU, and the iGPU does nothing but present it to the screen. 

"Offload" attempts to mimic more closely the functionality of Optimus on Windows. Under normal operation, the iGPU renders everything, from the desktop to the applications. Specific 3D applications can be rendered on the dGPU, and shared to the iGPU for display. When no applications are being rendered on the dGPU, it may be powered off. NVIDIA has no plans to support PRIME render offload at this time. 

So in "Output" mode the dGPU will always be running. I've not tested to see how this affects battery life. 🙂 Time will tell; I'll update the post to let everyone know.


Thursday, November 9, 2017

Dell Precision 5520 Touchpad; openSUSE TW and Leap with libinput

Going forward, libinput is favored over the synaptics touchpad driver and will integrate better with future desktop environments, especially as things move toward Wayland. Some desktop environments already let you change a few of these settings today. You can use the following method to make sure you have libinput set up with some of the most desired settings defined, such as two- and three-finger clicking. At least for me. ☺

1) Make sure you remove all synaptics packages; there are usually 4 or 5 installed by default.
# rpm -qa  | grep synaptics

2) Make sure that you have libinput and friends installed (The following outputs are from TW)
# rpm -qa | grep libinput

# rpm -qa | grep xinput

# rpm -qa | grep xdotool

3) Execute the following if you don't have some of them installed.

# zypper in libinput-udev libinput-tools libinput10 libinput10-32bit xf86-input-libinput xinput xdotool


4) Now you're ready to set up some properties for your touchpad. First, let's find out which id is yours.

# xinput list | grep Touchpad
⎜   ↳ DLL07BF:01 06CB:7A13 Touchpad             id=14   [slave  pointer  (2)]

On mine, the id is 14 in the output above. We can use this id to set properties for the touchpad. There are 3 properties which make sense to me to have enabled in Linux.

Enabling two-finger and three-finger clicking for the touchpad. This allows you to use a two-finger click for the right mouse button and a three-finger click for middle mouse button actions in Linux, such as paste. To enable this, use the command below. Notice that I use 14, the id from the previous command, in the command options.

# xinput set-prop 14 "libinput Click Method Enabled" 0 1

Another one I like, though some might not, is enabling natural scrolling. To enable it, run the following command.

# xinput set-prop 14 "libinput Natural Scrolling Enabled" 1

I also found that my pointer was not moving quite as fast as I would have liked, so I changed the pointer speed.

# xinput set-prop 14 "libinput Accel Speed" 1

Those are the 3 properties I really like to use. However, there are quite a few others you can tweak and tune. Use the following command to get a full list of the properties available for the touchpad, again making sure to use your id in the command options.

# xinput list-props 14
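The steps above can be tied together in a small login script that discovers the touchpad id and applies all three properties at once. This is only a sketch (the `get_touchpad_id` helper name is my own), and the parsing is demonstrated against a sample xinput line so you can see the pipeline work without a live X session:

```shell
# Extract the numeric id from `xinput list` output read on stdin;
# assumes the device name contains "Touchpad", as on the Precision 5520.
get_touchpad_id() {
  grep 'Touchpad' | sed -n 's/.*id=\([0-9]*\).*/\1/p' | head -n1
}

sample='⎜   ↳ DLL07BF:01 06CB:7A13 Touchpad             id=14   [slave  pointer  (2)]'
printf '%s\n' "$sample" | get_touchpad_id   # → 14

# On a live system:
#   id=$(xinput list | get_touchpad_id)
#   xinput set-prop "$id" "libinput Click Method Enabled" 0 1
#   xinput set-prop "$id" "libinput Natural Scrolling Enabled" 1
#   xinput set-prop "$id" "libinput Accel Speed" 1
```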

If you really like the tapping options you can enable those. Yuk!

There is a small GUI utility you can install called lxinput, which covers some basic settings but is not feature complete. Both GNOME and KDE are integrating the ability to configure the libinput touchpad driver, and neither is feature complete yet. In KDE Plasma you can set the Accel Speed from System Settings.

To make these libinput settings persist between reboots and sleep modes, you can add the following to your xorg configuration.

Edit /etc/X11/xorg.conf.d/40-libinput.conf (This is a default file that's installed with openSUSE)
Modify the InputClass that's labeled with the identifier "touchpad catchall" to look like the below. Notice I removed the Tapping option.

Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        MatchProduct "DLL07BF:01 06CB:7A13 Touchpad"
        Driver "libinput"
        Option "ClickMethod" "clickfinger"
        Option "NaturalScrolling" "false"
        Option "AccelSpeed" "1"
EndSection

For the full list of options, see man 4 libinput.


Friday, March 24, 2017

VMware Workstation 12.x.x for latest openSUSE Tumbleweed

As you know, Tumbleweed is constantly churning, and as such there are points in time where some of the libraries required to run VMware Workstation get a new version that isn't compatible with the latest release or the version you have installed. Mostly the kernel problems get worked around with simple patches so that the vmmon and vmnet drivers can compile correctly, and I've posted a few here on my blog, along with a tool that can help. See my post from January.

So what happens if, for example (as happened this month with a newer version of the curl library), we have a newer version of a library than what is supported by VMware Workstation? You go ahead and launch vmware, but no VMware Workstation window opens. The first thing you can do is inspect the log created at /tmp/vmware-<your_home_user>/vmware-apploader-<some_number>.log. The beginning of the log shows which libraries it will use, either from the SYSTEM or from those SHIPPED with VMware Workstation. From the output this month, we have the following suspect lines in our log.
2017-03-24T08:59:45.773-06:00| appLoader| I125: Marking node as SHIPPED.
2017-03-24T08:59:45.773-06:00| appLoader| I125: Marking node as SHIPPED.
2017-03-24T08:59:45.773-06:00| appLoader| I125: Marking node as SYSTEM.
2017-03-24T08:59:45.789-06:00| appLoader| I125: System has OpenSSL version OpenSSL/1.0.2k, ours is OpenSSL/1.0.2k.
2017-03-24T08:59:45.789-06:00| appLoader| I125: System has version 7.53.1 (need 7.51.0) and has been compiled with c-ares support (SSL compatibility? yes).
2017-03-24T08:59:45.789-06:00| appLoader| I125: Marking node as SYSTEM.
Since the curl library was marked as SYSTEM, we know that VMware is trying to use the library from our installed packages, and libcurl had some recent upgrades. We can try to mitigate this in two ways.

We can launch from the command line, forcing VMware Workstation to use all of its SHIPPED libraries:
# VMWARE_USE_SHIPPED_LIBS=force vmware &
Or we can force the one library to be loaded from the SHIPPED set by running the following:
# export LD_LIBRARY_PATH=/usr/lib/vmware/lib:$LD_LIBRARY_PATH
# vmware &
Both ways are acceptable, but in some cases the latter can have better performance, in my experience.
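If you want to see at a glance which libraries the apploader decided to take from the system, you can filter the log for those decisions. A small sketch (the `system_nodes` helper name is my own; it just filters log text read on stdin):

```shell
# Print the "Marking node" decisions that chose a SYSTEM library; these
# are the candidates for the SHIPPED-library workarounds above.
system_nodes() {
  grep 'Marking node' | grep 'SYSTEM'
}

# On a live system (the log file name varies per run):
#   cat /tmp/vmware-"$USER"/vmware-apploader-*.log | system_nodes
```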

Hopefully this can help with future changes in openSUSE Tumbleweed and ensure that you can continue to run VMware Workstation no matter the outcome of the installed packages.


Tuesday, February 21, 2017

OpenStack Summit Boston 2017 Presentation Votes (ends Feb. 21st, 2017 at 11:59pm PST)

Open voting is available for all session submissions until Tuesday, Feb 21, 2017 at 11:59PM PST. This is a great way for the community to decide what they want to hear.

I have submitted a handful of sessions which I hope you will vote for. Below are short summaries and links to their voting pages.

Avoid the storm! Tips on deploying the Enterprise Cloud
The primary driver for enterprise organizations choosing to deploy a private cloud is to enable on-demand access to the resources that the business needs to respond to market opportunities. But business agility requires availability...
Keys to Successful Data Center Modernization to Infrastructure Agility
Data center modernization and consolidation is the continuous optimization and enhancement of existing data center infrastructure, enabling better support for mission-critical and Mode 1 applications. The companion Key Initiative, "Infrastructure Agility" focuses on Mode 2...
Best Practices with Cloud Native Microservices on OpenStack
It doesn't matter where you're at with your implementation of microservices, but you do need to understand some key fundamentals when it comes to designing and properly deploying on OpenStack. If you're just starting out, you will need to learn some key things such as the common characteristics, monolithic vs. microservice, componentization, and decentralized governance, to name a few. In this session you'll learn some of these basics and where to start...
Thanks for your support.

Wednesday, January 4, 2017

VMware Workstation 12.5.2 patch for Linux Kernel 4.9

I've rounded up the working patches from the public posts and created my own patch files. You can use my updated VMware module compile script to patch it as well; it also does a bit of cleanup. Grab the script and the patch files from here. Once downloaded, make sure they are all in the same directory and that you have made the script executable. Then follow the rest of the steps below.

1) Directory should look like this:
# ls -al mkvm* *.patch
-rwxr-xr-x 1 cseader users 2965 Jan  4 21:11        
-rwxr-xr-x 1 cseader users 1457 Sep 26 15:47
-rw-r--r-- 1 cseader users  650 Jan  4 19:16 vmmon-hostif.patch        
-rw-r--r-- 1 cseader users  650 Jan  4 21:21 vmnet-userif.patch
2) Execute with sudo or log in as root.

# ./                                                
It will immediately start the cleanup and then extract the VMware source. If the patch files are in the same directory, as shown above, it will patch the source to compile against kernel 4.9.

3) Now start VMware Workstation.


Monday, September 26, 2016

VMware Workstation / gcc 5.x / Linux; Error: Failed to get gcc info

Well, if you're like me and have been sick of this "Error: Failed to get gcc information." for a while now when installing VMware Workstation on the major Linux distributions out there, then you will likely want to automate the process of compiling it correctly and doing the rest of the tasks once your compile is complete.

Download my script here and run it each time your kernel changes, of course.

Let me know how your experience is with this, or if you would like to see some additions or adjustments.

Friday, August 12, 2016

Traffic shaping with virtual pfsense and SLES 12 KVM Host

Traffic shaping with pfsense has really worked out, lowering my bufferbloat and giving me better network performance.

I built my own pfsense box from a Dell OptiPlex 990 SFF PC with an Intel Core i5-2400 3.1GHz and installed an Intel PRO/1000 VT Quad Port Server Adapter LP PCI-E for more networks and VLANs on my network. Traffic shaping was a breeze with pfsense. I run pfsense virtualized because the OS itself doesn't work on the hardware directly; BSD seems to have more limited hardware support than Linux these days. In fact, the BSD kernel didn't have the right support for this chip and kept hard-locking with a kernel error that made no sense. So I installed SUSE Linux Enterprise Server 12 SP1 as the host OS, which is humming along with no kernel errors, and pfsense runs as a KVM virtual machine. I have bridged all the network interfaces for the virtual machine and it works great. It's been running for 3 months now with no troubles.

Now to try out Sophos UTM. It looks like a fun alternative to pfsense, and it's Linux based. :-)

Friday, February 12, 2016

OpenStack Summit Austin 2016 Presentation Votes (ends Feb. 17th, 2016)

Open voting is available for all session submissions until Wednesday, Feb 17, 2016 at 11:59PM PST. This is a great way for the community to decide what they want to hear.

I have submitted a handful of sessions which I hope you will vote for. Below are short summaries and links to their voting pages.

Operations and Management of your OpenStack Multi-Tenant Platform ( Speaker: Cameron Seader )
You need to deploy your OpenStack infrastructure with ease and without interruption, audit your OpenStack environment for known vulnerabilities, and quickly remediate them. When your growth creates a necessity to fine-tune your storage, compute, and control resources, you need to quickly determine your bottlenecks and easily...

Shared Filesystems Management (Manila); Forging the way ahead ( Speakers: Cameron Seader, Anika Suri - NetApp )
Manila is the OpenStack shared filesystem service that was announced September 2013. In January 2015 it was labeled as an officially incubated OpenStack program. Now with the current stable release in Liberty, Manila is providing the management of file shares (for example, NFS and CIFS) as a core service to OpenStack. Manila currently works with a variety of vendors, including NetApp, Red Hat Storage (GlusterFS), EMC, IBM GPFS, Hitachi, HPE, and on a base Linux NFS server...

Your Software-Defined Data Center Leading the Way; Agile DevOps ( Speakers: Cameron Seader, Simon Briggs)
With new tooling comes the opportunity to change the way we do things. So take a journey through time, looking at where we have come from and where we are going... OpenStack is leading the way toward a software-defined data center. How can the software-defined data center take us to the cloud with OpenStack? Will we be able to adapt teams to these new methods? How do we get there? We'll learn about Agile development and DevOps and how they come together to fill the gaps in your software-defined data center approach...

Thanks for your support.

Friday, June 26, 2015

SUSE® OpenStack Cloud 5 Admin Appliance – The Easier Way to Start Your Cloud

If you used the SUSE OpenStack Cloud 4 Admin Appliance, you know it was a downloadable, OpenStack Icehouse-based appliance which even a non-technical user could get off the ground to deploy an OpenStack cloud. Today, I am excited to tell you about the new Juno-based SUSE OpenStack Cloud 5 Admin Appliance.

With the SUSE OpenStack Cloud 4 release we moved to a single integrated version. After lots of feedback from users, it was clear that no one really minded downloading something over 10GB as long as it had everything they needed to start an OpenStack private cloud. In version 5 the download is over 15GB, but it has all of the software you might need, from SLES 11 or SLES 12 compute infrastructure to SUSE Enterprise Storage integration. I was able to integrate the latest SMT mirror repositories at a reduced size and include everything you might need to speed your deployment.

The new appliance incorporates all of the needed software and repositories to set up, stage and deploy OpenStack Juno in your sandbox, lab, or production environments. Coupled with it are the added benefits of automated deployment of highly available cloud services; support for mixed-hypervisor clouds containing KVM, Xen, Microsoft Hyper-V, and VMware vSphere; integration of our award-winning SUSE Enterprise Storage; support from our award-winning, worldwide service organization; and integration with SUSE engineered maintenance processes. In addition, there is integration with tools such as SUSE Studio™ and SUSE Manager to help you build and manage your cloud applications.

With the availability of SUSE OpenStack Cloud 5, and based on feedback from partners, vendors and customers deploying OpenStack, it was time to release a new and improved Admin Appliance. This new image incorporates the most common use cases and is flexible enough to add in other components such as SMT (Subscription Management Tool) and SUSE Customer Center registration, so you can keep your cloud infrastructure updated.

The SUSE OpenStack Cloud 5 Admin Appliance is intended to provide a quick and easy deployment. The partners and vendors we are working with find it useful for quickly testing their applications in SUSE OpenStack Cloud and validating their use cases. For customers it has become a great tool for deploying production private clouds based on OpenStack.

With version 5.0.x you can proceed with the following to get moving now with OpenStack.

It's important that you start by reading and understanding the Deployment Guide before proceeding. This will give you some insight into the requirements and an overall understanding of what is involved in deploying your own private cloud.

As a companion to the Deployment Guide, we have provided a questionnaire that will help you answer and organize the critical steps discussed in the Deployment Guide.

To help you get moving quickly, the SUSE Cloud OpenStack Admin Appliance Guide provides instructions on using the appliance and details a step-by-step installation.

The most up-to-date guide will always be here.

A new fun feature to try out in SUSE OpenStack Cloud 5 is the batch deployment capability. The appliance includes three templates in the /root home directory (NFS.yaml, DRBD.yaml, simple-cloud.yaml):

NFS.yaml will deploy a 2-node controller cluster with NFS shared storage and 2 compute nodes, with all of the common OpenStack services running in the cluster.

DRBD.yaml will deploy a 2-node controller cluster with DRBD replication for the database and messaging queue and 2 compute nodes, with all of the common OpenStack services running in the cluster.

simple-cloud.yaml will deploy 1 controller and 1 compute node with all of the common OpenStack services running in a simple setup.

Now is the time. Go out and start downloading version 5, walk through the Appliance Guide, and see how quick and easy it can be to set up OpenStack. Don't stop there. Make it highly available, set up more than one hypervisor, and don't forget to have a lot of fun.