Pas Apicella

Information on Pivotal Cloud Foundry (PAS/PKS/PFS) - Continuously deliver any app to every major private and public cloud with a single platform.

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

Docs as follows

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

Spring Cloud GCP using Spring Data JPA with MySQL 2nd Gen 5.7

Sun, 2018-10-07 19:06
Spring Cloud GCP adds integrations with Spring JDBC so you can run your MySQL or PostgreSQL databases in Google Cloud SQL using Spring JDBC, or other libraries that depend on it like Spring Data JPA. Here is an example of how to use Spring Data JPA with "Spring Cloud GCP".

1. First we need a MySQL 2nd Gen 5.7 instance to exist in our GCP account, which I have previously created as shown below




2. Create a new project using Spring Initializr, or however you like to create it, BUT ensure you have the following dependencies in place. Here is an example of what my pom.xml looks like. In short, add the following Maven dependencies as per the image below



pom.xml

  
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.5.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <spring-cloud-gcp.version>1.0.0.RELEASE</spring-cloud-gcp.version>
    <spring-cloud.version>Finchley.SR1</spring-cloud.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-rest</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-gcp-starter-sql-mysql</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

...

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-gcp-dependencies</artifactId>
            <version>${spring-cloud-gcp.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

3. Let's start by creating a basic Employee entity as shown below

Employee.java
  
package pas.apj.pa.sb.gcp;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.persistence.*;

@Entity
@NoArgsConstructor
@AllArgsConstructor
@Data
@Table(name = "employee")
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
}

4. Let's now add a Rest JpaRepository for our Entity

EmployeeRepository.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.data.jpa.repository.JpaRepository;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {
}
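
Extending JpaRepository already gives us findAll(), save() and friends for free. Though not part of this demo, Spring Data JPA would also let us declare derived query methods on the same interface; a sketch (the findByName method is a hypothetical addition):

import java.util.List;

public interface EmployeeRepository extends JpaRepository<Employee, Long> {

    // Hypothetical addition: Spring Data JPA derives the query from the method name
    List<Employee> findByName(String name);
}
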
5. Let's create a basic RestController to show all our Employee entities

EmployeeRest.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

@RestController
public class EmployeeRest {

    private EmployeeRepository employeeRepository;

    public EmployeeRest(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @RequestMapping("/emps-rest")
    public List<Employee> getAllemps() {
        return employeeRepository.findAll();
    }
}

6. Let's create an ApplicationRunner to show our list of Employees as the application starts up

EmployeeRunner.java
  
package pas.apj.pa.sb.gcp;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
public class EmployeeRunner implements ApplicationRunner {

    private EmployeeRepository employeeRepository;

    public EmployeeRunner(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        employeeRepository.findAll().forEach(System.out::println);
    }
}
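
The Spring Boot main class isn't shown in the steps above since Spring Initializr generates one for you. For completeness, a minimal sketch (the class name is an assumption):

SpringCloudGcpApplication.java

package pas.apj.pa.sb.gcp;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Assumed main class; a Spring Initializr-generated project includes the equivalent.
@SpringBootApplication
public class SpringCloudGcpApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudGcpApplication.class, args);
    }
}
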
7. Add a data.sql file to create some records in the database at application startup

data.sql

insert into employee (name) values ('pas');
insert into employee (name) values ('lucia');
insert into employee (name) values ('lucas');
insert into employee (name) values ('siena');

8. Finally, our "application.yml" file will need to be able to connect to our MySQL instance running in GCP as well as set some properties for JPA, as shown below

spring:
  jpa:
    hibernate:
      ddl-auto: create-drop
      use-new-id-generator-mappings: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MariaDB53Dialect
  cloud:
    gcp:
      sql:
        instance-connection-name: fe-papicella:australia-southeast1:apples-mysql-1
        database-name: employees
  datasource:
    initialization-mode: always
    hikari:
      maximum-pool-size: 1


A couple of things in here are important.

- Set the Hibernate property "dialect: org.hibernate.dialect.MariaDB53Dialect"; without it, when Hibernate creates tables for your entities you will run into the error below, because Cloud SQL database tables are created using the InnoDB storage engine.

ERROR 3161 (HY000): Storage engine MyISAM is disabled (Table creation is disallowed).

- For a demo I don't need multiple DB connections so I set the datasource "maximum-pool-size" to 1

- Notice how I set the "instance-connection-name" and "database-name", which are vital for Spring Cloud SQL to establish database connections
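
- Spring Cloud GCP also needs to know which GCP project to use and how to authenticate. If you have authenticated locally with the gcloud SDK it can pick those up automatically; otherwise they can be set explicitly via the spring.cloud.gcp.project-id and spring.cloud.gcp.credentials.location properties. A sketch (the key file path is a placeholder):

spring:
  cloud:
    gcp:
      project-id: fe-papicella
      credentials:
        location: file:/path/to/service-account-key.json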

9. Now we need to make sure we have a database called "employees" as per our "application.yml" setting.


10. Now let's run our Spring Boot application and verify it's working by showing some output from the logs

- Connection being established

2018-10-08 10:54:37.333  INFO 89922 --- [           main] c.google.cloud.sql.mysql.SocketFactory   : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] via ssl socket.
2018-10-08 10:54:37.335  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : First Cloud SQL connection, generating RSA key pair.
2018-10-08 10:54:38.685  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Obtaining ephemeral certificate for Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1].
2018-10-08 10:54:40.132  INFO 89922 --- [           main] c.g.cloud.sql.core.SslSocketFactory      : Connecting to Cloud SQL instance [fe-papicella:australia-southeast1:apples-mysql-1] on IP [35.197.180.223].
2018-10-08 10:54:40.748  INFO 89922 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Start completed.

- Showing the 4 Employee records

Employee(id=1, name=pas)
Employee(id=2, name=lucia)
Employee(id=3, name=lucas)
Employee(id=4, name=siena)

11. Finally, let's make a RESTful call to the endpoint we defined above using HTTPie as follows

pasapicella@pas-macbook:~$ http :8080/emps-rest
HTTP/1.1 200
Content-Type: application/json;charset=UTF-8
Date: Mon, 08 Oct 2018 00:01:42 GMT
Transfer-Encoding: chunked

[
    {
        "id": 1,
        "name": "pas"
    },
    {
        "id": 2,
        "name": "lucia"
    },
    {
        "id": 3,
        "name": "lucas"
    },
    {
        "id": 4,
        "name": "siena"
    }
]

More Information

Spring Cloud GCP
https://cloud.spring.io/spring-cloud-gcp/

Spring Cloud GCP SQL demo (This one is using Spring JDBC)
https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-sql-sample

Categories: Fusion Middleware

PKS - What happens when we create a new namespace with NSX-T

Mon, 2018-09-17 07:02
I previously blogged about the integration between PKS and NSX-T in this post

http://theblasfrompas.blogspot.com/2018/09/pivotal-container-service-pks-with-nsx.html

In this post let's look at what occurs within NSX-T when we create a new namespace in our K8s cluster.

1. List the K8s clusters we have available

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks clusters

Name    Plan Name  UUID                                  Status     Action
apples  small      d9f258e3-247c-4b4c-9055-629871be896c  succeeded  UPDATE

2. Fetch the cluster config for our cluster into our local Kubectl config

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ pks get-credentials apples

Fetching credentials for cluster apples.
Context set for cluster apples.

You can now switch between clusters by using:
$kubectl config use-context

3. Create a new Namespace for the K8s cluster as shown below

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl create namespace production
namespace "production" created

4. View the Namespaces in the K8s cluster

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-148$ kubectl get ns
NAME          STATUS    AGE
default       Active    12d
kube-public   Active    12d
kube-system   Active    12d
production    Active    9s

Using the NSX-T manager, the first thing you will see is a new Tier 1 router created for the K8s namespace "production"



Let's view its configuration via the "Overview" screen


Finally, let's see the default "Logical Routes" as shown below



When we push workloads to the "production" namespace, it's this dynamically created configuration we get out of the box, allowing us to expose a "LoadBalancer" service as required across the Pods deployed within the namespace.
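
For example, a minimal LoadBalancer service pushed into the new namespace might look like this (a sketch; the "my-app" name and label are hypothetical), and NSX-T would then provision virtual servers for it just as it does for the default namespace:

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
  namespace: production
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: my-app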

Categories: Fusion Middleware

Pivotal Container Service (PKS) with NSX-T on vSphere

Wed, 2018-09-05 06:15
It has taken some time, but I was finally able to officially test PKS with NSX-T rather than using Flannel.

While there is a bit of initial setup to install NSX-T and PKS and then ensure PKS networking uses NSX-T, rolling out multiple Kubernetes clusters with unique networking is greatly simplified by NSX-T. Here I am going to show what happens after pushing a workload to my PKS K8s cluster

Before we can do anything we need the following...

Pre Steps

1. Ensure you have NSX-T setup and a dashboard UI as follows


2. Ensure you have PKS installed. In this example I have it installed on vSphere, which at the time of this blog is the only supported option we can use with NSX-T



The PKS tile needs to be set up to use NSX-T, which is done on this page of the tile configuration



3. You can see from the NSX-T manager UI that we have a Load Balancer set up as shown below. Navigate to "Load Balancing -> Load Balancers"



This Load Balancer is backed by a few "Virtual Servers", one for http (port 80) and the other for https (port 443), which can be seen when you select the Virtual Servers link


From here we have logical switches created for each of the Kubernetes namespaces. We see two for our load balancer, and the other 3 are for the 3 K8s namespaces (default, kube-public, kube-system)


Here is how we verify the namespaces we have in our K8s cluster

pasapicella@pas-macbook:~/pivotal $ kubectl get ns
NAME          STATUS    AGE
default       Active    5h
kube-public   Active    5h
kube-system   Active    5h

All of the logical switches are connected to the T0 Logical Router by a set of T1 Logical Routers


For these to be accessible, they are linked to the T0 Logical Router via a set of router ports



Now let's push a basic K8s workload and see what NSX-T and PKS give us out of the box...

Steps

Let's create our K8s cluster using the PKS CLI. You will need a PKS CLI user, which can be created following this doc

https://docs.pivotal.io/runtimes/pks/1-1/manage-users.html

1. Login using the PKS CLI as follows

$ pks login -k -a api.pks.haas-148.pez.pivotal.io -u pas -p ****

2. Create a cluster as shown below

$ pks create-cluster apples --external-hostname apples.haas-148.pez.pivotal.io --plan small

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress

3. Wait for the cluster to be created, then verify as follows

$ pks cluster apples

Name:                     apples
Plan Name:                small
UUID:                     d9f258e3-247c-4b4c-9055-629871be896c
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   apples.haas-148.pez.pivotal.io
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  10.1.1.10

The PKS CLI is basically telling BOSH to go ahead and, based on the small plan, create a fully functional/working K8s cluster, from the VMs to all the processes that go along with it, and once it's up, keep it up and running in the event of failure.

Here is an example of one of the WORKER VMs of the cluster shown in the vSphere Web Client



4. Using the YAML file below, let's push a workload to our K8s cluster

apiVersion: v1
kind: Service
metadata:
  labels:
    app: fortune-service
    deployment: pks-workshop
  name: fortune-service
spec:
  ports:
  - port: 80
    name: ui
  - port: 9080
    name: backend
  - port: 6379
    name: redis
  type: LoadBalancer
  selector:
    app: fortune
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: fortune
    deployment: pks-workshop
  name: fortune
spec:
  containers:
  - image: azwickey/fortune-ui:latest
    name: fortune-ui
    ports:
    - containerPort: 80
      protocol: TCP
  - image: azwickey/fortune-backend-jee:latest
    name: fortune-backend
    ports:
    - containerPort: 9080
      protocol: TCP
  - image: redis
    name: redis
    ports:
    - containerPort: 6379
      protocol: TCP

5. Push the workload as follows once the above YAML is saved to a file

$ kubectl create -f fortune-teller.yml
service "fortune-service" created
pod "fortune" created

6. Verify the PODS are running as follows

$ kubectl get all
NAME         READY     STATUS    RESTARTS   AGE
po/fortune   3/3       Running   0          35s

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                      AGE
svc/fortune-service   LoadBalancer   10.100.200.232   10.195.3.134   80:30591/TCP,9080:32487/TCP,6379:32360/TCP   36s
svc/kubernetes        ClusterIP      10.100.200.1     <none>         443/TCP                                      5h

Great, so now let's head back to our NSX-T manager UI and see what has been created. From the above output you can see an LB service is created and an external IP address assigned

7. The first thing you will notice is that in "Virtual Servers" we have some new entries for each of our containers as shown below


and ...


Finally, the LB we previously had in place shows our "Virtual Servers" added to its config and routable



More Information

Pivotal Container Service
https://docs.pivotal.io/runtimes/pks/1-1/

VMware NSX-T
https://docs.vmware.com/en/VMware-NSX-T/index.html
Categories: Fusion Middleware

PCF Platform Automation with Concourse (PCF Pipelines)

Mon, 2018-08-20 03:28
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

... and from there setting up Concourse

http://theblasfrompas.blogspot.com/2018/08/deploying-concourse-using-my-bubble.html

.. of course this was created so I can now use the PCF Pipelines to deploy Pivotal Cloud Foundry's Pivotal Application Service (PAS). At a high level, here is how to achieve that, with some screenshots of the end result

Steps

1. To get started you would use this link as follows. In my example I was deploying PCF to AWS

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf

AWS Install Pipeline

https://github.com/pivotal-cf/pcf-pipelines/tree/master/install-pcf/aws

2. Create a versioned bucket for holding Terraform state. On AWS that will look as follows


3. Unless you ensure the AWS pre-reqs are met you won't be able to install PCF, so this link highlights all that you will need for installing PCF on AWS, such as key pairs, limits, etc

https://docs.pivotal.io/pivotalcf/2-1/customizing/aws.html

4. Create a public DNS zone and get its zone ID; we will need that when we set up the pipeline shortly. I also created a self-signed public certificate used for my DNS as part of the setup, which is required as well.





5. At this point we can download the PCF Pipelines from network.pivotal.io or you can use the link as follows

https://network.pivotal.io/products/pcf-automation/



6. Once you have unzipped the file you would then change to the directory for the right IaaS, in my case "aws"

$ cd pcf-pipelines/install-pcf/aws


7. Replace all of the CHANGEME values in params.yml with real values for your AWS env. This file is documented so it's clear what you need to add and where. Most of the other values are defaults of course.

8. Log in to Concourse using the "fly" command line

$ fly --target pcfconcourse login  --concourse-url https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com -k

9. Add pipeline

$ fly -t pcfconcourse set-pipeline -p deploy-pcf -c pipeline.yml -l params.yml

10. Unpause pipeline

$ fly -t pcfconcourse unpause-pipeline -p deploy-pcf

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines/pcf-pipelines/install-pcf/aws$ fly -t pcfconcourse pipelines
name        paused  public
deploy-pcf  no      no

11. The pipeline on concourse will look as follows



12. Now to execute the pipeline you have to manually run 2 jobs

- Run bootstrap-terraform-state job manually




- Run create-infrastructure manually
 


At this point the pipeline will kick off automatically. If you need to re-run due to an issue you can manually kick off the job again after you fix what you need to fix. The “wipe-env” job will take everything for PAS down, and terraform removes all IaaS config as well.
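
Both jobs (and any re-run) can also be triggered from the command line with fly rather than the Concourse UI; a sketch, using the pipeline name "deploy-pcf" and the job names shown above:

$ fly -t pcfconcourse trigger-job -j deploy-pcf/bootstrap-terraform-state --watch
$ fly -t pcfconcourse trigger-job -j deploy-pcf/create-infrastructure --watch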

While each job runs, its current state is shown as per the image below


If successful, your AWS account will have the PCF VMs created, for example


Verifying that PCF installed correctly is best done using Pivotal Operations Manager as shown below



More Information

https://network.pivotal.io/products/pcf-automation/


Categories: Fusion Middleware

Deploying concourse using my "Bubble" created Bosh director

Fri, 2018-08-17 23:27
Previously I blogged about using "Bubble" or bosh-bootloader as per the post below.

http://theblasfrompas.blogspot.com/2018/08/bosh-bootloader-or-bubble-as-pronounced.html

Now with the bosh director deployed it's time to deploy Concourse itself. The process is very straightforward as per the steps below

1. First let's clone the bosh concourse deployment using the GitHub project as follows
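
Assuming the upstream concourse/concourse-bosh-deployment repository, the clone is simply:

$ git clone https://github.com/concourse/concourse-bosh-deployment.git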



2. Target the bosh director and log in. We must set the ENV variables to connect to the AWS bosh director correctly using "eval" as we did in the previous post; this will set all the ENV variables we need

$ eval "$(bbl print-env -s state)"
$ bosh alias-env aws-env
$ bosh -e aws-env log-in

3. At this point we need to set the external URL, which is essentially the load balancer we created when we deployed the Bosh Director in the previous post. To get that value, run a command as follows from where we deployed the bosh director, as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bbl lbs -s state
Concourse LB: bosh-director-aws-concourse-lb [bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com]

4. Now let's set that ENV variable as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ export external_url=https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com

5. Now from the cloned bosh concourse directory change to the directory "concourse-bosh-deployment/cluster" as shown below

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ cd concourse-bosh-deployment/cluster

6. Upload stemcell as follows

$ bosh upload-stemcell light-bosh-stemcell-3363.69-aws-xen-hvm-ubuntu-trusty-go_agent.tgz

Verify:

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-bosh stemcells
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name                                     Version  OS             CPI  CID
bosh-aws-xen-hvm-ubuntu-trusty-go_agent  3363.69  ubuntu-trusty  -    ami-0812e8018333d59a6

(*) Currently deployed

1 stemcells

Succeeded
 
7. Now let's deploy Concourse with a command as follows. Make sure you set a password as per "atc_basic_auth.password"

$ bosh deploy -d concourse concourse.yml \
  -l ../versions.yml \
  --vars-store cluster-creds.yml \
  -o operations/basic-auth.yml \
  -o operations/privileged-http.yml \
  -o operations/privileged-https.yml \
  -o operations/tls.yml \
  -o operations/tls-vars.yml \
  -o operations/web-network-extension.yml \
  --var network_name=default \
  --var external_url=$external_url \
  --var web_vm_type=default \
  --var db_vm_type=default \
  --var db_persistent_disk_type=10GB \
  --var worker_vm_type=default \
  --var deployment_name=concourse \
  --var web_network_name=private \
  --var web_network_vm_extension=lb \
  --var atc_basic_auth.username=admin \
  --var atc_basic_auth.password=..... \
  --var worker_ephemeral_disk=500GB_ephemeral_disk \
  -o operations/worker-ephemeral-disk.yml

8. Once deployed, verify the deployment and VMs created as follows

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env deployments
Using environment 'https://10.0.0.6:25555' as client 'admin'

Name       Release(s)          Stemcell(s)                                      Team(s)
concourse  concourse/3.13.0    bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3363.69  -
           garden-runc/1.13.1
           postgres/28

1 deployments

Succeeded
pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 32. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/db78de7f-55c5-42f5-bf9d-20b4ef0fd331      running        z1  10.0.16.5  i-04904fbdd1c7e829f  default  true
web/767b14c8-8fd3-46f0-b74f-0dca2c3b9572     running        z1  10.0.16.4  i-0e5f1275f635bd49d  default  true
worker/cde3ae19-5dbc-4c39-854d-842bbbfbe5cd  running        z1  10.0.16.6  i-0bd44407ec0bd1d8a  default  true

3 vms

Succeeded

9. Navigate to the LB URL we used above to access the Concourse UI, using the username/password you set as per the deployment

https://bosh-director-aws-concourse-lb-f827ef220d02270c.elb.ap-southeast-2.amazonaws.com/


10. Finally, we can see the Bosh Director and Concourse deployment VMs on our AWS EC2 instances page as follows



More Information

Categories: Fusion Middleware

bosh-bootloader or "Bubble" as pronounced and how to get started

Wed, 2018-08-15 06:50
I decided to try out installing bosh using the bosh-bootloader CLI today. bbl currently supports AWS, GCP, Microsoft Azure, Openstack and vSphere. In this example I started with AWS, but it won't be long until I try this on GCP

It's worth noting that this can all be done remotely from your laptop once you give BBL the access it needs for the cloud environment.

Steps

1. First you're going to need the bosh v2 CLI, which you can install from here

  https://bosh.io/docs/cli-v2/

Verify:

pasapicella@pas-macbook:~$ bosh -version
version 5.0.1-2432e5e9-2018-07-18T21:41:03Z

Succeeded

2. Second you will need Terraform; having a Mac, I use brew

$ brew install terraform

Verify:

pasapicella@pas-macbook:~$ terraform version
Terraform v0.11.7

3. Now we need to install BBL which is done as follows on a Mac. I also show how to install bosh CLI as well if you missed step 1

$ brew tap cloudfoundry/tap
$ brew install bosh-cli
$ brew install bbl

Further instructions on this link

https://github.com/cloudfoundry/bosh-bootloader

4. At this point you're ready to deploy BOSH; the instructions for AWS are here

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

Pretty straightforward, but here is what I did at this point

5. In order for bbl to interact with AWS, an IAM user must be created. This user will be issuing API requests to create the infrastructure such as EC2 instances, load balancers, subnets, etc.

The user must have the following policy which I just copy into my clipboard to use later:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:*",
                "elasticloadbalancing:*",
                "cloudformation:*",
                "iam:*",
                "kms:*",
                "route53:*",
                "ec2:*"
            ],
            "Resource": "*"
        }
    ]
}


$ aws iam create-user --user-name "bbl-user"

This next command requires you to copy the policy JSON above

$ aws iam put-user-policy --user-name "bbl-user" --policy-name "bbl-policy" --policy-document "$(pbpaste)"

$ aws iam create-access-key --user-name "bbl-user"

You will get a JSON response at this point as follows. Save the output created here as it's used in the next few steps

{
    "AccessKey": {
        "UserName": "bbl-user",
        "Status": "Active",
        "CreateDate": "2018-08-07T03:30:39.993Z",
        "SecretAccessKey": ".....",
        "AccessKeyId": "........"
    }
}

In the next step bbl will use these credentials to create infrastructure on AWS.

6. Now we can pave the infrastructure, create a jumpbox, and create a BOSH Director, as well as an LB, which I need as I plan to deploy Concourse using BOSH.

$ bbl up --aws-access-key-id ..... --aws-secret-access-key ... --aws-region ap-southeast-2 --lb-type concourse --name bosh-director -d -s state --iaas aws

The process takes around 5-8 minutes.

The bbl state directory contains all of the files that were used to create your bosh director. This should be checked in to version control, so that you have all the information necessary to destroy or update this environment at a later date.
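
For example, tearing the environment down at a later date (from the same directory) would simply be:

$ bbl destroy -s state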

7. Finally we target the bosh director as follows. Keep in mind everything we need is stored in the "state" directory as per above

$ eval "$(bbl print-env -s state)"

8. This will set various ENV variables which the bosh CLI will then use to target the bosh director.  Now we need to just prepare ourselves to actually log in. I use a script as follows

target-bosh.sh

bbl director-ca-cert -s state > bosh.crt
export BOSH_CA_CERT=bosh.crt

export BOSH_ENVIRONMENT=$(bbl director-address -s state)

echo ""
echo "Username: $(bbl director-username -s state)"
echo "Password: $(bbl director-password -s state)"
echo ""
echo "Log in using -> bosh log-in"
echo ""

bosh alias-env aws-env

echo "ENV set to -> aws-env"
echo ""

Output when run, with the password omitted ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ ./target-bosh.sh

Username: admin
Password: ......

Log in using -> bosh log-in

Using environment 'https://10.0.0.6:25555' as client 'admin'

Name      bosh-bosh-director-aws
UUID      3ade0d28-77e6-4b5b-9be7-323a813ac87c
Version   266.4.0 (00000000)
CPI       aws_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin

Succeeded
ENV set to -> aws-env

9. Finally, let's log in as follows

$ bosh -e aws-env log-in

Output ->

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env log-in
Successfully authenticated with UAA

Succeeded

10. Last but not least, let's see what VMs bosh has under management. These VMs are for the Concourse I installed. If you would like to install Concourse use this link - https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/concourse.md

pasapicella@pas-macbook:~/pivotal/aws/pcf-pipelines$ bosh -e aws-env vms
Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 20. Done

Deployment 'concourse'

Instance                                     Process State  AZ  IPs        VM CID               VM Type  Active
db/ec8aa978-1ec5-4402-9835-9a1cbce9c1e5      running        z1  10.0.16.5  i-0d33949ece572beeb  default  true
web/686546be-09d1-43ec-bbb7-d96bb5edc3df     running        z1  10.0.16.4  i-03af52f574399af28  default  true
worker/679be815-6250-477c-899c-b962076f26f5  running        z1  10.0.16.6  i-0efac99165e12f2e6  default  true

3 vms

Succeeded

More Information

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/getting-started-aws.md

https://github.com/cloudfoundry/bosh-bootloader/blob/master/docs/howto-target-bosh-director.md


Categories: Fusion Middleware

Using CFDOT (CF Diego Operator Toolkit) on Pivotal Cloud Foundry

Tue, 2018-06-19 22:12
I decided to use CFDOT (CF Diego Operator Toolkit) on my PCF 2.1 vSphere ENV today. Setting it up isn't required as it's installed out of the box on the Bosh-managed Diego Cells as shown below. It gives nice detailed information around cell capacity and other useful metrics.

1. SSH into Ops Manager VM

pasapicella@pas-macbook:~/pivotal/PCF/APJ/PEZ-HaaS/haas-165$ ssh ubuntu@opsmgr.haas-165.mydns.com
Unauthorized use is strictly prohibited. All access and activity
is subject to logging and monitoring.
ubuntu@opsmgr.haas-165.mydns.com's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 4.4.0-124-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

...

ubuntu@bosh-stemcell:~$

At this point you will need to log into the Bosh Director as described below


2. Once logged in, issue a command as follows to get all VMs. We just need the name of one of the Diego Cell VMs

ubuntu@bosh-stemcell:~$ bosh -e vmware vms --column=Instance --column="Process State"
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Task 12086
Task 12087
Task 12086 done

Task 12087 done

Deployment 'cf-edc48fe108f1e5581fba'

Instance                                                            Process State
backup-prepare/eff97a4b-15a2-425c-8333-1dbaaefbb5ff                 running
clock_global/d77c485f-7d7c-43ae-b9de-584411ffa0bd                   running
cloud_controller/874dd06c-b76e-427a-943e-dea66f0345b6               running
cloud_controller/bba1819e-b7f4-4a34-897a-c78f6189667c               running
cloud_controller_worker/803bfb3f-653b-4311-b831-9b76e602714e        running
cloud_controller_worker/f5956edb-9510-4d99-a0f7-8545831b45ec        running
consul_server/3bfdc6bd-2f1d-4607-8564-148fadd4bc3d                  running
consul_server/4927cc4b-4531-429b-b379-83e283b779ba                  running
consul_server/69c1c5ee-8288-49bd-9112-afe05fe536f4                  running
diego_brain/01d3914c-2ab1-4b75-ada7-2267f34faee6                    running
diego_brain/564cf558-c2dc-4045-a4d1-54f633633dd6                    running
diego_brain/a22c2621-4278-4a83-94ee-34287deb9310                    running
diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf                     running
diego_cell/9452a3b4-d40c-49f1-9dbf-8d74202f7dff                     running
diego_cell/dfc8e214-2e59-4050-9312-1113662ce79f                     running

...

3. SSH into a Bosh managed Diego Cell VM. Use the correct name for one of your Diego Cells and your deployment name for CF itself

ubuntu@bosh-stemcell:~$ bosh -e vmware -d cf-edc48fe108f1e5581fba ssh diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf
Using environment '1.1.1.1' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Using deployment 'cf-edc48fe108f1e5581fba'

....

4. Run a command as follows "sudo su -"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~$ sudo su -

5. Verify CFDOT CLI is installed using "cfdot"

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot
A command-line tool to interact with a Cloud Foundry Diego deployment

Usage:
  cfdot [command]

Available Commands:
  actual-lrp-groups            List actual LRP groups
  actual-lrp-groups-for-guid   List actual LRP groups for a process guid
  cancel-task                  Cancel task
  cell                         Show the specified cell presence
  cell-state                   Show the specified cell state
  cell-states                  Show cell states for all cells
  cells                        List registered cell presences
  claim-lock                   Claim Locket lock
  claim-presence               Claim Locket presence
  create-desired-lrp           Create a desired LRP
  create-task                  Create a Task
  delete-desired-lrp           Delete a desired LRP
  delete-task                  Delete a Task
  desired-lrp                  Show the specified desired LRP
  desired-lrp-scheduling-infos List desired LRP scheduling infos
  desired-lrps                 List desired LRPs
  domains                      List domains
  help                         Get help on [command]
  locks                        List Locket locks
  lrp-events                   Subscribe to BBS LRP events
  presences                    List Locket presences
  release-lock                 Release Locket lock
  retire-actual-lrp            Retire actual LRP by index and process guid
  set-domain                   Set domain
  task                         Display task
  task-events                  Subscribe to BBS Task events
  tasks                        List tasks in BBS
  update-desired-lrp           Update a desired LRP

Flags:
  -h, --help   help for cfdot

Use "cfdot [command] --help" for more information about a command.

6. Let's see what each Diego Cell has for capacity as a whole

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cells | jq -r
{
  "cell_id": "7ca12f7d-737f-47fb-a8bc-91d73e4791cf",
  "rep_address": "http://10.193.229.62:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://7ca12f7d-737f-47fb-a8bc-91d73e4791cf.cell.service.cf.internal:1801"
}
{
  "cell_id": "9452a3b4-d40c-49f1-9dbf-8d74202f7dff",
  "rep_address": "http://10.193.229.61:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://9452a3b4-d40c-49f1-9dbf-8d74202f7dff.cell.service.cf.internal:1801"
}
{
  "cell_id": "dfc8e214-2e59-4050-9312-1113662ce79f",
  "rep_address": "http://10.193.229.63:1800",
  "zone": "RP01",
  "capacity": {
    "memory_mb": 16047,
    "disk_mb": 103549,
    "containers": 249
  },
  "rootfs_provider_list": [
    {
      "name": "preloaded",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "preloaded+layer",
      "properties": [
        "cflinuxfs2"
      ]
    },
    {
      "name": "docker"
    }
  ],
  "rep_url": "https://dfc8e214-2e59-4050-9312-1113662ce79f.cell.service.cf.internal:1801"
}

7. Finally, let's see what available resources we have on each Diego Cell

diego_cell/7ca12f7d-737f-47fb-a8bc-91d73e4791cf:~# cfdot cell-states | jq '"Cell Id -> \(.cell_id): L -> \(.LRPs | length), Available Resources [MemoryMB] -> \(.AvailableResources.MemoryMB), Available Resources [DiskMB] -> \(.AvailableResources.DiskMB), Available Resources [Containers] -> \(.AvailableResources.Containers)"' -r

Cell Id -> 7ca12f7d-737f-47fb-a8bc-91d73e4791cf: L -> 17, Available Resources [MemoryMB] -> 6843, Available Resources [DiskMB] -> 86141, Available Resources [Containers] -> 232
Cell Id -> 9452a3b4-d40c-49f1-9dbf-8d74202f7dff: L -> 14, Available Resources [MemoryMB] -> 5371, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
Cell Id -> dfc8e214-2e59-4050-9312-1113662ce79f: L -> 14, Available Resources [MemoryMB] -> 4015, Available Resources [DiskMB] -> 89213, Available Resources [Containers] -> 235
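
You can also drill into a single cell using the "cell-state" command listed in the help above; for example (a sketch, using one of the cell IDs from the output above):

# cfdot cell-state 7ca12f7d-737f-47fb-a8bc-91d73e4791cf | jq .AvailableResources

This prints just the AvailableResources block (MemoryMB, DiskMB, Containers) for that one cell.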

More Information

https://github.com/cloudfoundry/cfdot


Categories: Fusion Middleware

Deploying a Spring Boot Application on a Pivotal Container Service (PKS) Cluster on GCP

Wed, 2018-05-09 00:31
I have been "cf pushing" for as long as I can remember, so with Pivotal Container Service (PKS) let's walk through the process of deploying a basic Spring Boot application to a PKS cluster running on GCP.

A few assumptions:

1. PKS is already installed as shown by my Operations Manager UI below



2. A PKS Cluster already exists as shown by the command below

pasapicella@pas-macbook:~$ pks list-clusters

Name        Plan Name  UUID                                  Status     Action
my-cluster  small      1230fafb-b5a5-4f9f-9327-55f0b8254906  succeeded  CREATE

Example:

We will be using this Spring Boot application at the following GitHub URL

  https://github.com/papicella/springboot-actuator-2-demo


1. In this example my Spring Boot application has what is required within my Maven pom.xml file to allow me to create a Docker image, as shown below
  
<!-- tag::plugin[] -->
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.3.6</version>
    <configuration>
        <repository>${docker.image.prefix}/${project.artifactId}</repository>
        <buildArgs>
            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>
<!-- end::plugin[] -->

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>unpack</id>
            <phase>package</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>${project.groupId}</groupId>
                        <artifactId>${project.artifactId}</artifactId>
                        <version>${project.version}</version>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>

2. Once a docker image was built I then pushed that to Docker Hub as shown below
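
For reference, the dockerfile-maven-plugin above expects a Dockerfile at the project root; a minimal sketch (the base image choice is an assumption), followed by the build and push commands (docker.image.prefix is whatever Docker Hub account/prefix you configured in the pom):

FROM openjdk:8-jdk-alpine
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

$ mvn clean package dockerfile:build
$ docker push <docker.image.prefix>/<project.artifactId>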



3. Now we will need a PKS cluster as shown below before we can continue

pasapicella@pas-macbook:~$ pks cluster my-cluster

Name:                     my-cluster
Plan Name:                small
UUID:                     1230fafb-b5a5-4f9f-9327-55f0b8254906
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   cluster1.pks.pas-apples.online
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  192.168.20.10

4. Now we want to wire up "kubectl" using a command as follows

pasapicella@pas-macbook:~$ pks get-credentials my-cluster

Fetching credentials for cluster my-cluster.
Context set for cluster my-cluster.

You can now switch between clusters by using:
$kubectl config use-context

pasapicella@pas-macbook:~$ kubectl cluster-info
Kubernetes master is running at https://cluster1.pks.pas-apples.online:8443
Heapster is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-influxdb is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Now we are ready to deploy a Spring Boot workload to our cluster. To do that let's download the YAML file below

https://github.com/papicella/springboot-actuator-2-demo/blob/master/lb-withspringboot.yml

Once downloaded create a deployment as follows

$ kubectl create -f lb-withspringboot.yml

pasapicella@pas-macbook:~$ kubectl create -f lb-withspringboot.yml
service "spring-boot-service" created
deployment "spring-boot-deployment" created

6. Now let’s verify our deployment using some kubectl commands as follows

$ kubectl get deployment spring-boot-deployment
$ kubectl get pods
$ kubectl get svc

pasapicella@pas-macbook:~$ kubectl get deployment spring-boot-deployment
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
spring-boot-deployment   1         1         1            1           1m

pasapicella@pas-macbook:~$ kubectl get pods
NAME                                     READY     STATUS    RESTARTS   AGE
spring-boot-deployment-ccd947455-6clwv   1/1       Running   0          2m

pasapicella@pas-macbook:~$ kubectl get svc
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
kubernetes            ClusterIP      10.100.200.1     <none>          443/TCP          23m
spring-boot-service   LoadBalancer   10.100.200.137   35.197.187.43   8080:31408/TCP   2m

7. Using the external IP address GCP exposed for us, we can access our Spring Boot application on port 8080 as shown below. In this example

http://35.197.187.43:8080/



RESTful End Point

pasapicella@pas-macbook:~$ http http://35.197.187.43:8080/employees/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Wed, 09 May 2018 05:26:19 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "employee": {
            "href": "http://35.197.187.43:8080/employees/1"
        },
        "self": {
            "href": "http://35.197.187.43:8080/employees/1"
        }
    },
    "name": "pas"
}

More Information

Using PKS
https://docs.pivotal.io/runtimes/pks/1-0/using.html

Categories: Fusion Middleware

Using/Verifying the Autoscale service from Apps Manager UI in 5 minutes

Fri, 2018-04-20 04:59
Recently at a customer site I was asked to show how the Autoscale service shipped by default with Pivotal Cloud Foundry would work. Here is how we demoed that in less than 5 minutes.

1. Select an application to Autoscale and click on the "Autoscaling" radio option.


2. Select "Manage Autoscaling" link as shown below.


3. Set the maximum instance limit to "4" and click Save as shown below. You can also set the minimum to 1 instance if you want, which will make it easier to verify the scaling of instances as one instance can easily be put under pressure.


4. Now let's set a "Scaling Rule" by clicking on the "Edit" link as shown below.


5. Now let's add a CPU rule by clicking on the "Add" link as shown below.


6. Now define a CPU rule as shown below and click on Save. Don't forget to make it active using the radio option. In this example we use very low thresholds BUT it would be better to increase these to something more realistic like 30% and 60% respectively.




Now at this point we are ready to test the Autoscale service, BUT to do that we are going to have to create some load. There are many different ways to do that, but "ab" on my Mac was the fastest way.

7. Create some load on an endpoint for your application to force CPU utilization to increase as shown below

pasapicella@pas-macbook:~$ ab -n 10000 -c 25 http://springboot-actuator-appsmanager-delightful-jaguar.cfapps.io/employees
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking springboot-actuator-appsmanager-delightful-jaguar.cfapps.io (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

....

8. If you return to the Apps Manager UI soon enough you will see that the Autoscale service has fired events to add more instances as per the screenshots below.




It's worth noting that the CF CLI Plugin for Autoscale can also show us what we have defined, as shown below. More information on this plugin is as follows

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

View which applications are using the Autoscaler service:

pasapicella@pas-macbook:~$ cf autoscaling-apps
Presenting autoscaler apps in org apples-pivotal-org / space development as papicella@pivotal.io
OK
Name                              Guid                                   Enabled   Min Instances   Max Instances
springboot-actuator-appsmanager   6c137fea-6a99-4069-8031-a2aa3978804c   true      2               4

View events for an application that has Autoscaler service bound to it:

pasapicella@pas-macbook:~$ cf autoscaling-events springboot-actuator-appsmanager
Presenting autoscaler events for app springboot-actuator-appsmanager for org apples-pivotal-org / space development as papicella@pivotal.io
OK
Time                   Description
2018-04-20T09:56:30Z   Scaled down from 3 to 2 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:55:56Z   Scaled down from 4 to 3 instances. All metrics are currently below minimum thresholds.
2018-04-20T09:54:46Z   Can not scale up. At max limit of 4 instances. Current CPU of 20.75% is above upper threshold of 8.00%.
2018-04-20T09:54:11Z   Can not scale up. At max limit of 4 instances. Current CPU of 30.53% is above upper threshold of 8.00%.
2018-04-20T09:53:36Z   Can not scale up. At max limit of 4 instances. Current CPU of 32.14% is above upper threshold of 8.00%.
2018-04-20T09:53:02Z   Can not scale up. At max limit of 4 instances. Current CPU of 31.51% is above upper threshold of 8.00%.
2018-04-20T09:52:27Z   Scaled up from 3 to 4 instances. Current CPU of 19.59% is above upper threshold of 8.00%.
2018-04-20T09:51:51Z   Scaled up from 2 to 3 instances. Current CPU of 8.99% is above upper threshold of 8.00%.
2018-04-20T09:13:24Z   Scaling from 1 to 2 instances: app below minimum instance limit
2018-04-20T09:13:23Z   Enabled autoscaling.

More Information

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler-cli.html#install

https://docs.run.pivotal.io/appsman-services/autoscaler/using-autoscaler.html

Categories: Fusion Middleware

Spring Cloud Services CF CLI Plugin

Tue, 2018-04-10 06:27
The Spring Cloud Services plugin for the Cloud Foundry Command Line Interface tool (cf CLI) adds commands for interacting with Spring Cloud Services service instances. It provides easy access to functionality relating to the Config Server and Service Registry; for example, it can be used to send values to a Config Server service instance for encryption or to list all applications registered with a Service Registry service instance.

Here is a simple example of how we can view various bound apps for a Service Registry

1. Install the CF CLI Plugin for Spring Cloud Services as shown below

$ cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org

$ cf install-plugin -r CF-Community "Spring Cloud Services"

2. Now in the Apps Manager UI we have a Service Registry instance with some bound microservices as shown below



3. Now we can use the SCS CF CLI Plugin to also get this information

pasapicella@pas-macbook:~$ cf service-registry-list eureka-service
Listing service registry eureka-service in org apples-pivotal-org / space scs-demo as papicella@pivotal.io...
OK

Service instance: eureka-service
Server URL: https://eureka-fcf42b1c-6b85-444c-9a43-fee82f2c68c3.cfapps.io/

eureka app name cf app name    cf instance index zone      status
EDGE-SERVICE    edge-service   0                 cfapps.io UP
COFFEE-SERVICE  coffee-service 0                 cfapps.io UP

The full list of plugin commands is shown in the screenshot below.

Note: Use "cf plugins" to get this list once installed


More Information

http://docs.pivotal.io/spring-cloud-services/1-5/common/cf-cli-plugin.html

Categories: Fusion Middleware

Deploying my first Pivotal Container Service (PKS) workload to my PKS cluster

Wed, 2018-04-04 01:15
If you followed along on the previous blogs you would have installed PKS 1.0 on GCP (Google Cloud Platform), created your first PKS cluster, wired it into kubectl, and provided an external load balancer, as per the previous two posts.

Previous posts:

Install Pivotal Container Service (PKS) on GCP and getting started
http://theblasfrompas.blogspot.com.au/2018/04/install-pivotal-container-service-pks.html

Wiring kubectl / Setup external LB on GCP into Pivotal Container Service (PKS) clusters to get started
http://theblasfrompas.blogspot.com.au/2018/04/wiring-kubectl-setup-external-lb-on-gcp.html

So let's now create our first workload as shown below

1. Download the demo YML from here

https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx-lb.yml

2. Deploy as shown below

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl create -f nginx-lb.yml
service "nginx" created
deployment "nginx" created

3. Check current status

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-679dc9c764-8cwzq   1/1       Running   0          22s
nginx-679dc9c764-p8tf2   1/1       Running   0          22s
nginx-679dc9c764-s79mp   1/1       Running   0          22s

4. Wait for the external IP address of the nginx service to be assigned

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS/demo-workload$ kubectl get svc
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.100.200.1     <none>          443/TCP        17h
nginx        LoadBalancer   10.100.200.143   35.189.23.119   80:30481/TCP   1m

5. In a browser, access the K8s workload as follows, using the external IP

http://35.189.23.119



More Info

https://docs.pivotal.io/runtimes/pks/1-0/index.html
Categories: Fusion Middleware

Wiring kubectl / Setup external LB on GCP into Pivotal Container Service (PKS) clusters to get started

Tue, 2018-04-03 22:45
Now that I have PCF 2.1 running with PKS 1.0 installed and a cluster up and running, how would I get started accessing that cluster? Here are the steps for a GCP (Google Cloud Platform) install of PCF 2.1 with PKS 1.0. They go through the requirements around an external LB for the cluster as well as wiring kubectl into the cluster to get started creating deployments.

Previous blog as follows:

http://theblasfrompas.blogspot.com.au/2018/04/install-pivotal-container-service-pks.html

1. First we will want an external load balancer for our K8s clusters, which will need to exist already; it is a TCP load balancer using port 8443, which is the port the master node runs on. The external IP address is what you will need to use in the next step
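
If you prefer the gcloud CLI to the console, a TCP load balancer of this shape can be created roughly as follows (a sketch; the name mirrors the firewall target used below and the region is an assumption):

$ gcloud compute target-pools create pks-cluster-api-1 --region australia-southeast1
$ gcloud compute forwarding-rules create pks-cluster-api-1 \
    --region australia-southeast1 --ports 8443 --target-pool pks-cluster-api-1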



2. Create a Firewall Rule for the LB with details as follows.

Note: the LB name is "pks-cluster-api-1". Make sure to include the network tag and select the network you installed PKS on.

  • Network: Make sure to select the right network. Choose the value that matches with the VPC Network name you installed PKS on
  • Ingress - Allow
  • Target: pks-cluster-api-1
  • Source: 0.0.0.0/0
  • Ports: tcp:8443
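
The equivalent rule via the gcloud CLI would look roughly like this (a sketch; YOUR-PKS-NETWORK is a placeholder for the VPC network you installed PKS on):

$ gcloud compute firewall-rules create pks-cluster-api-1 \
    --network YOUR-PKS-NETWORK --allow tcp:8443 \
    --source-ranges 0.0.0.0/0 --target-tags pks-cluster-api-1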





3. Now you could easily just create a cluster using the external IP address from above, or use a DNS entry mapped to the external IP address, which is what I have done, so I use an FQDN instead

pasapicella@pas-macbook:~$ pks create-cluster my-cluster --external-hostname cluster1.pks.pas-apples.online --plan small

Name:                     my-cluster
Plan Name:                small
UUID:                     64a086ce-c94f-4c51-95f8-5a5edb3d1476
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   cluster1.pks.pas-apples.online
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress


4. Now just wait a while while it creates the VMs and runs some tests; it's roughly around 10 minutes. Once done you will see the cluster created as follows

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ pks list-clusters

Name        Plan Name  UUID                                  Status     Action
my-cluster  small      64a086ce-c94f-4c51-95f8-5a5edb3d1476  succeeded  CREATE

5. Now one of the VMs created is the master VM for the cluster; there are a few ways to determine the master VM as shown below.

5.1. Use GCP Console VM instances page and filter by "master"



5.2. Run a bosh command to view the VM's of your deployments. We are interested in the VM's for our cluster service. The master instance is named as "master/ID" as shown below.

$ bosh -e gcp vms --column=Instance --column "Process State" --column "VM CID"

Task 187. Done

Deployment 'service-instance_64a086ce-c94f-4c51-95f8-5a5edb3d1476'

Instance                                     Process State  VM CID
master/13b42afb-bd7c-4141-95e4-68e8579b015e  running        vm-4cfe9d2e-b26c-495c-4a62-77753ce792ca
worker/490a184e-575b-43ab-b8d0-169de6d708ad  running        vm-70cd3928-317c-400f-45ab-caf6fa8bd3a4
worker/79a51a29-2cef-47f1-a6e1-25580fcc58e5  running        vm-e3aa47d8-bb64-4feb-4823-067d7a4d4f2c
worker/f1f093e2-88bd-48ae-8ffe-b06944ea0a9b  running        vm-e14dde3f-b6fa-4dca-7f82-561da9c03d33

4 vms

6. Attach the VM to the load balancer backend configuration as shown below.
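
Via the gcloud CLI this would be something like the following (a sketch; the instance name is the master's VM CID from the bosh output above and the zone is an assumption):

$ gcloud compute target-pools add-instances pks-cluster-api-1 \
    --instances vm-4cfe9d2e-b26c-495c-4a62-77753ce792ca \
    --instances-zone australia-southeast1-a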



7. Now we can get the credentials from PKS CLI and pass them to kubectl as shown below

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ pks get-credentials my-cluster

Fetching credentials for cluster my-cluster.
Context set for cluster my-cluster.

You can now switch between clusters by using:
$kubectl config use-context

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ kubectl cluster-info
Kubernetes master is running at https://cluster1.pks.domain-name:8443
Heapster is running at https://cluster1.pks.domain-name:8443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://cluster1.pks.domain-name:8443/api/v1/namespaces/kube-system/services/kube-dns/proxy
monitoring-influxdb is running at https://cluster1.pks.domain-name:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

8. To verify it worked, here are some commands you would run; "kubectl cluster-info" is one of those.

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ kubectl get pods
No resources found.

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ kubectl get deployments
No resources found.

9. Finally, let's start the Kubernetes UI to monitor this cluster. We do that as easily as this.

pasapicella@pas-macbook:~/pivotal/GCP/install/21/PKS$ kubectl proxy
Starting to serve on 127.0.0.1:8001  

The UI URL requires you to append /ui to the url above

Eg: http://127.0.0.1:8001/ui

Note: It will prompt you for the kubectl config file, which would be in the $HOME/.kube/config file. Failure to present this means the UI won't show you much and will give lots of warnings




More Info

https://docs.pivotal.io/runtimes/pks/1-0/index.html
Categories: Fusion Middleware

Install Pivotal Container Service (PKS) on GCP and getting started

Tue, 2018-04-03 20:15
With the release of Pivotal Cloud Foundry 2.1 (PCF) I decided this time to install Pivotal Application Service (PAS) as well as Pivotal Container Service (PKS) using the one BOSH Director, which isn't recommended for production installs BUT is fine for dev installs. Once installed you will have both the PAS tile and PKS tile as shown below.

https://content.pivotal.io/blog/pivotal-cloud-foundry-2-1-adds-cloud-native-net-envoy-native-service-discovery-to-boost-your-transformation


So here is how to get started with PKS once it's installed

1. Create a user for the PKS client to log in with.

1.1. ssh into the ops manager VM

1.2. Target the UAA endpoint for PKS; this was set up in the PKS tile

ubuntu@opsman-pcf:~$ uaac target https://PKS-ENDPOINT:8443 --skip-ssl-validation
Unknown key: Max-Age = 86400

Target: https://PKS-ENDPOINT:8443

1.3. Authenticate with UAA using the secret you retrieve from the PKS tile / Credentials tab as shown in the image below. Run the following command, replacing UAA-ADMIN-SECRET with your UAA admin secret

ubuntu@opsman-pcf:~$ uaac token client get admin -s UAA-ADMIN-SECRET
Unknown key: Max-Age = 86400

Successfully fetched token via client credentials grant.
Target: https://PKS-ENDPOINT:8443
Context: admin, from client admin



1.4. Create an ADMIN user as shown below using the UAA-ADMIN-SECRET password obtained from the Ops Manager UI as shown above

ubuntu@opsman-pcf:~$ uaac user add pas --emails papicella@pivotal.io -p PASSWD
user account successfully added

ubuntu@opsman-pcf:~$ uaac member add pks.clusters.admin pas
success
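
Note: pks.clusters.admin gives this user access to all clusters. The PKS docs also describe a pks.clusters.manage scope for users who should only manage the clusters they create themselves; a sketch with a hypothetical username would look like this:

ubuntu@opsman-pcf:~$ uaac user add dev-user --emails dev-user@pivotal.io -p PASSWD
ubuntu@opsman-pcf:~$ uaac member add pks.clusters.manage dev-user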

2. Now let's log in using the PKS CLI with the new admin user we created

pasapicella@pas-macbook:~$ pks login -a PKS-ENDPOINT -u pas -p PASSWD -k

API Endpoint: pks-api.pks.pas-apples.online
User: pas

3. You can test whether you have any DNS or connectivity issues with a command as follows.

pasapicella@pas-macbook:~$ nc -vz PKS-ENDPOINT 8443
found 0 associations
found 1 connections:
     1: flags=82
outif en0
src 192.168.1.111 port 62124
dst 35.189.1.209 port 8443
rank info not available
TCP aux info available

Connection to PKS-ENDPOINT port 8443 [tcp/pcsync-https] succeeded!
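
If the nc test fails, a plain DNS lookup helps narrow down whether the problem is name resolution or connectivity; dig ships with macOS, and nslookup works just as well:

pasapicella@pas-macbook:~$ dig +short PKS-ENDPOINT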

4. You can run a simple command to verify you're connected as follows; below shows no K8s clusters exist at this stage

pasapicella@pas-macbook:~$ pks list-clusters

Name  Plan Name  UUID  Status  Action

You can use the PKS CLI to create a new cluster, view clusters, resize clusters, etc.

pasapicella@pas-macbook:~$ pks

The Pivotal Container Service (PKS) CLI is used to create, manage, and delete Kubernetes clusters. To deploy workloads to a Kubernetes cluster created using the PKS CLI, use the Kubernetes CLI, kubectl.

Version: 1.0.0-build.3

Note: The PKS CLI is under development, and is subject to change at any time.

Usage:
  pks [command]

Available Commands:
  cluster         View the details of the cluster
  clusters        Show all clusters created with PKS
  create-cluster  Creates a kubernetes cluster, requires cluster name and an external host name
  delete-cluster  Deletes a kubernetes cluster, requires cluster name
  get-credentials Allows you to connect to a cluster and use kubectl
  help            Help about any command
  login           Login to PKS
  logout          Logs user out of the PKS API
  plans           View the preconfigured plans available
  resize          Increases the number of worker nodes for a cluster

Flags:
  -h, --help      help for pks
      --version   version for pks

Use "pks [command] --help" for more information about a command.

5. Now that you have logged in, you can create a cluster as follows and you will get a K8s cluster to begin working with

pasapicella@pas-macbook:~$ pks create-cluster my-cluster --external-hostname EXT-LB-HOST --plan small

Name:                     my-cluster
Plan Name:                small
UUID:                     64a086ce-c94f-4c51-95f8-5a5edb3d1476
Last Action:              CREATE
Last Action State:        in progress
Last Action Description:  Creating cluster
Kubernetes Master Host:   cluster1.FQDN
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  In Progress

Finally, when done, you will see "Last Action:" as "succeeded" as shown below

pasapicella@pas-macbook:~$ pks cluster my-cluster

Name:                     my-cluster
Plan Name:                small
UUID:                     64a086ce-c94f-4c51-95f8-5a5edb3d1476
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   cluster1.FQDN
Kubernetes Master Port:   8443
Worker Instances:         3
Kubernetes Master IP(s):  MASTER-IP-ADDRESS
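
From here the same CLI lets you grow the cluster. A resize sketch is shown below; the --num-nodes flag is what I'd expect based on the help output above, but verify with "pks resize --help" for your CLI version:

pasapicella@pas-macbook:~$ pks resize my-cluster --num-nodes 5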

More Info

https://docs.pivotal.io/runtimes/pks/1-0/index.html


Categories: Fusion Middleware

Pivotal Cloud Foundry Healthwatch for Pivotal Cloud Foundry 2.0 on GCP

Thu, 2018-03-15 01:29
I decided to eventually install PCF Healthwatch on my Google Cloud Platform PCF 2.0 instance. Installing it is straightforward using the Ops Manager UI, and once installed it will look like this.

Note: This is PCF 2.0 on GCP


Once installed, the Web UI endpoint for the application is as follows. The login username and password are those of the UAA admin user. By default the "healthwatch.read" scope is given to this user only. You can always create a new user with this scope if you like (see the sketch below).

https://healthwatch.SYSTEM-DOMAIN
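
As mentioned above, rather than logging in as the UAA admin user you can grant the "healthwatch.read" scope to a dedicated user. A minimal sketch using the uaac CLI, assuming a hypothetical username and that you have the UAA admin client secret from Ops Manager:

$ uaac target https://uaa.SYSTEM-DOMAIN --skip-ssl-validation
$ uaac token client get admin -s UAA-ADMIN-CLIENT-SECRET
$ uaac user add healthwatch-viewer --emails viewer@example.com -p PASSWD
$ uaac member add healthwatch.read healthwatch-viewer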

The main page has various useful information and more than enough to show you what's happening in your PCF instance, as shown below.



Clicking on any of the headings for each tile gives you more detailed information. The two screenshots below show some CF CLI command history tests like "cf push" and "cf logs", and also what is happening within the Diego cells in terms of memory, disk and the containers themselves.




More Information

https://docs.pivotal.io/pcf-healthwatch/1-1/index.html
Categories: Fusion Middleware

Just gave CFDEV a quick test and it's easy and includes BOSH!!!!

Sun, 2018-03-11 01:57
CF Dev is a new distribution of Cloud Foundry designed to run on a developer’s laptop or workstation using native hypervisors and a fully functional BOSH Director.

I decided to give it a test run today, and it's a fast and easy full CF experience deployed through a CF CLI plugin, as described in the GitHub project

  https://github.com/pivotal-cf/cfdev

Here we run some BOSH commands once it's up and running. You can't run BOSH commands without first setting your environment to use the correct BOSH Director, which you do as follows

  $ eval "$(cf dev bosh env)"

pasapicella@pas-macbook:~/apps/ENV/cfdev$ bosh deployments
Using environment '10.245.0.2' as client 'admin'

Name  Release(s)                    Stemcell(s)                                          Team(s)  Cloud Config
cf    binary-buildpack/1.0.15       bosh-warden-boshlite-ubuntu-trusty-go_agent/3468.17  -        latest
      bosh-dns/0.2.0
      capi/1.46.0
      cf-mysql/36.10.0
      cf-networking/1.9.0
      cf-smoke-tests/40
      cf-syslog-drain/5
      cflinuxfs2/1.179.0
      consul/191
      diego/1.32.1
      dotnet-core-buildpack/1.0.32
      garden-runc/1.10.0
      go-buildpack/1.8.15
      grootfs/0.30.0
      java-buildpack/4.7.1
      loggregator/99
      nats/22
      nodejs-buildpack/1.6.13
      php-buildpack/4.3.46
      python-buildpack/1.6.4
      routing/0.169.0
      ruby-buildpack/1.7.8
      staticfile-buildpack/1.4.20
      statsd-injector/1.0.30
      uaa/53.3

1 deployments

Succeeded

pasapicella@pas-macbook:~/apps/ENV/cfdev$ bosh stemcells
Using environment '10.245.0.2' as client 'admin'

Name                                         Version   OS             CPI  CID
bosh-warden-boshlite-ubuntu-trusty-go_agent  3468.17*  ubuntu-trusty  -    54a8d4c1-5a02-4d89-5648-1132914a0cb8

(*) Currently deployed

1 stemcells

Succeeded

You can simply use the CF CLI once you target the correct API endpoint and log in as follows

pasapicella@pas-macbook:~/apps/ENV/cfdev$ cf api https://api.v3.pcfdev.io --skip-ssl-validation
Setting api endpoint to https://api.v3.pcfdev.io...
OK

api endpoint:   https://api.v3.pcfdev.io
api version:    2.100.0
Not logged in. Use 'cf login' to log in.

and to log in ...

pasapicella@pas-macbook:~/apps/ENV/cfdev$ cf login -o cfdev-org -u admin -p admin
API endpoint: https://api.v3.pcfdev.io
Authenticating...
OK

Targeted org cfdev-org

Targeted space cfdev-space

API endpoint:   https://api.v3.pcfdev.io (API version: 2.100.0)
User:           admin
Org:            cfdev-org
Space:          cfdev-space
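
When you're done, the same cfdev plugin can suspend and resume the whole environment. The commands below are the ones described in the project's README, so double-check them against your plugin version:

pasapicella@pas-macbook:~/apps/ENV/cfdev$ cf dev stop
pasapicella@pas-macbook:~/apps/ENV/cfdev$ cf dev start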
Categories: Fusion Middleware

Spring boot 2 Actuator Support and Pivotal Cloud Foundry 2.0

Sun, 2018-03-04 04:29
With Spring Boot Actuator you get production-ready features for your application. The main benefit of this library is that we can get production-grade tools without having to actually implement these features ourselves.

Actuator is mainly used to expose operational information about the running application – health, metrics, info, dump, env, etc. It uses HTTP endpoints or JMX beans to enable us to interact with it.

In this post we will show how Spring Boot 2.0 Actuator endpoints are automatically integrated into Pivotal Cloud Foundry Apps Manager.

1. Clone the following project as shown below

pasapicella@pas-macbook:~/temp$ git clone https://github.com/papicella/springboot-actuator-2-demo.git
Cloning into 'springboot-actuator-2-demo'...
remote: Counting objects: 57, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 57 (delta 0), reused 6 (delta 0), pack-reused 48
Unpacking objects: 100% (57/57), done.

2. Package as follows

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building springboot-autuator-2-demo 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @ springboot-autuator-2-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource

...

[INFO]
[INFO] --- maven-jar-plugin:3.0.2:jar (default-jar) @ springboot-autuator-2-demo ---
[INFO] Building jar: /Users/pasapicella/temp/springboot-actuator-2-demo/target/springboot-autuator-2-demo-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.0.0.M7:repackage (default) @ springboot-autuator-2-demo ---
[INFO]
[INFO] --- maven-dependency-plugin:3.0.1:unpack (unpack) @ springboot-autuator-2-demo ---
[INFO] Configured Artifact: com.example:springboot-autuator-2-demo:0.0.1-SNAPSHOT:jar
[INFO] Unpacking /Users/pasapicella/temp/springboot-actuator-2-demo/target/springboot-autuator-2-demo-0.0.1-SNAPSHOT.jar to /Users/pasapicella/temp/springboot-actuator-2-demo/target/dependency with includes "" and excludes ""
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.650 s
[INFO] Finished at: 2018-03-04T21:04:08+11:00
[INFO] Final Memory: 46M/594M
[INFO] ------------------------------------------------------------------------

3. Deploy as follows

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ cf push
Pushing from manifest to org apples-pivotal-org / space development as papicella@pivotal.io...
Using manifest file /Users/pasapicella/temp/springboot-actuator-2-demo/manifest.yml
Getting app info...
Updating app with these attributes...
  name:                springboot-actuator-appsmanager
  path:                /Users/pasapicella/temp/springboot-actuator-2-demo/target/springboot-autuator-2-demo-0.0.1-SNAPSHOT.jar
  buildpack:           client-certificate-mapper=1.5.0_RELEASE container-security-provider=1.13.0_RELEASE java-buildpack=v4.9-offline-https://github.com/cloudfoundry/java-buildpack.git#830f4c3 java-main java-opts java-security jvmkill-agent=1.12.0_RELEASE open-jdk-l...
  command:             JAVA_OPTS="-agentpath:$PWD/.java-buildpack/open_jdk_jre/bin/jvmkill-1.12.0_RELEASE=printHeapHistogram=1 -Djava.io.tmpdir=$TMPDIR -Djava.ext.dirs=$PWD/.java-buildpack/container_security_provider:$PWD/.java-buildpack/open_jdk_jre/lib/ext -Djava.security.properties=$PWD/.java-buildpack/java_security/java.security $JAVA_OPTS" && CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.10.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=250 -loadedClasses=17785 -poolType=metaspace -vmOptions="$JAVA_OPTS") && echo JVM Memory Configuration: $CALCULATED_MEMORY && JAVA_OPTS="$JAVA_OPTS $CALCULATED_MEMORY" && MALLOC_ARENA_MAX=2 SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher
  disk quota:          1G
  health check type:   port
  instances:           1
  memory:              1G
  stack:               cflinuxfs2
  routes:
    springboot-actuator-appsmanager-forgiving-camel.cfapps.io

Updating app springboot-actuator-appsmanager...
Mapping routes...
Comparing local files to remote cache...
Packaging files to upload...
Uploading files...

...

Waiting for app to start...

name:              springboot-actuator-appsmanager
requested state:   started
instances:         1/1
usage:             1G x 1 instances
routes:            springboot-actuator-appsmanager-forgiving-camel.cfapps.io
last uploaded:     Sun 04 Mar 21:07:03 AEDT 2018
stack:             cflinuxfs2
buildpack:         client-certificate-mapper=1.5.0_RELEASE container-security-provider=1.13.0_RELEASE java-buildpack=v4.9-offline-https://github.com/cloudfoundry/java-buildpack.git#830f4c3
                   java-main java-opts java-security jvmkill-agent=1.12.0_RELEASE open-jdk-l...
start command:     JAVA_OPTS="-agentpath:$PWD/.java-buildpack/open_jdk_jre/bin/jvmkill-1.12.0_RELEASE=printHeapHistogram=1 -Djava.io.tmpdir=$TMPDIR
                   -Djava.ext.dirs=$PWD/.java-buildpack/container_security_provider:$PWD/.java-buildpack/open_jdk_jre/lib/ext -Djava.security.properties=$PWD/.java-buildpack/java_security/java.security
                   $JAVA_OPTS" && CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.10.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=250 -loadedClasses=17785
                   -poolType=metaspace -vmOptions="$JAVA_OPTS") && echo JVM Memory Configuration: $CALCULATED_MEMORY && JAVA_OPTS="$JAVA_OPTS $CALCULATED_MEMORY" && MALLOC_ARENA_MAX=2 SERVER_PORT=$PORT
                   eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.JarLauncher

     state     since                  cpu      memory         disk           details
#0   running   2018-03-04T10:08:16Z   196.2%   385.7M of 1G   157.4M of 1G

4. The application.yml exposes all endpoints and is totally unsecured, so you would not want to do this in a production application. The application is deployed using an application.yml as follows.

spring:
  application:
    name: PCFSpringBootActuatorDemo
  jpa:
    hibernate:
      ddl-auto: update
management:
  endpoint:
    health:
      show-details: true
  endpoints:
    web:
      expose: '*'
      enabled: true
    jmx:
      expose: '*'
      enabled: true

Once deployed, Pivotal Cloud Foundry Apps Manager will show the Spring icon and use the Actuator endpoints.





Let's invoke some of the Actuator endpoints using HTTPie, or curl if you like. Remember we have exposed all web endpoints, allowing us to do this. One thing that has changed from Actuator 1.x to 2.0 is that the endpoints are now mapped under /actuator out of the box. You can list all the available endpoints just by invoking /actuator with a RESTful GET call as shown below.

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ http http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Date: Sun, 04 Mar 2018 10:13:38 GMT
X-Vcap-Request-Id: 22116a58-f689-4bd9-448c-023bae2ed5ec
transfer-encoding: chunked

{
    "_links": {
        "auditevents": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/auditevents",
            "templated": false
        },
        "beans": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/beans",
            "templated": false
        },
        "conditions": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/conditions",
            "templated": false
        },
        "configprops": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/configprops",
            "templated": false
        },
        "env": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/env",
            "templated": false
        },
        "env-toMatch": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/env/{toMatch}",
            "templated": true
        },
        "health": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/health",
            "templated": false
        },
        "heapdump": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/heapdump",
            "templated": false
        },
        "info": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/info",
            "templated": false
        },
        "loggers": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/loggers",
            "templated": false
        },
        "loggers-name": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/loggers/{name}",
            "templated": true
        },
        "mappings": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/mappings",
            "templated": false
        },
        "metrics": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/metrics",
            "templated": false
        },
        "metrics-requiredMetricName": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/metrics/{requiredMetricName}",
            "templated": true
        },
        "scheduledtasks": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/scheduledtasks",
            "templated": false
        },
        "self": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator",
            "templated": false
        },
        "threaddump": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/threaddump",
            "templated": false
        },
        "trace": {
            "href": "http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/trace",
            "templated": false
        }
    }
}

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ http http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/health
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 183
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Date: Sun, 04 Mar 2018 10:16:03 GMT
X-Vcap-Request-Id: be45c751-b77d-4e7c-77b6-0d7affa0fe16

{
    "details": {
        "db": {
            "details": {
                "database": "H2",
                "hello": 1
            },
            "status": "UP"
        },
        "diskSpace": {
            "details": {
                "free": 908681216,
                "threshold": 10485760,
                "total": 1073741824
            },
            "status": "UP"
        }
    },
    "status": "UP"
}

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ http http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/trace
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: application/vnd.spring-boot.actuator.v2+json;charset=UTF-8
Date: Sun, 04 Mar 2018 10:19:46 GMT
X-Vcap-Request-Id: e0c86f51-dcc4-4349-41d2-6b603677c3f4
transfer-encoding: chunked

{
    "traces": [
        {
            "info": {
                "headers": {
                    "request": {
                        "accept": "*/*",
                        "accept-encoding": "gzip, deflate",
                        "host": "springboot-actuator-appsmanager-forgiving-camel.cfapps.io",
                        "user-agent": "HTTPie/0.9.9",
                        "x-b3-spanid": "0ad427a9f13bad0c",
                        "x-b3-traceid": "0ad427a9f13bad0c",
                        "x-cf-applicationid": "c1e50a41-5e1e-475f-b9e6-116a7acd98a2",
                        "x-cf-instanceid": "db74a5d2-ac72-4c45-539a-118f",
                        "x-cf-instanceindex": "0",
                        "x-forwarded-port": "80",
                        "x-forwarded-proto": "http",
                        "x-request-start": "1520158694338",
                        "x-vcap-request-id": "5f1e5572-a841-4e3f-4b6f-2cfd0c0ccc8e"
                    },
                    "response": {
                        "Content-Type": "application/vnd.spring-boot.actuator.v2+json;charset=UTF-8",
                        "Date": "Sun, 04 Mar 2018 10:18:14 GMT",

...
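
The metrics endpoint works the same way; individual metrics are requested by name. For example, jvm.memory.used is one of the standard JVM metrics registered by Spring Boot 2's Micrometer integration (output omitted here):

pasapicella@pas-macbook:~/temp/springboot-actuator-2-demo$ http http://springboot-actuator-appsmanager-forgiving-camel.cfapps.io/actuator/metrics/jvm.memory.used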

More Information

https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-endpoints.html
Categories: Fusion Middleware

Pivotal Cloud Foundry App Instance Routing in HTTP Headers

Thu, 2018-01-18 04:26
Developers who want to obtain debug data for a specific instance of an app can use the HTTP header X-CF-APP-INSTANCE to make a request to an app instance. To demonstrate this, we can write a Spring Boot application which simply outputs the current CF app index so we are sure we are hitting the right application container.

The simplest way to do that is to define a RestController using Spring Boot as follows, which enables us to return the current application index and verify we are hitting the right container instance.
  
package com.example.pas;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoRest {

    // Injected from the CF_INSTANCE_IP / CF_INSTANCE_INDEX environment variables
    // that Cloud Foundry sets for every application container
    private final String ip;
    private final String index;

    @Autowired
    public DemoRest(@Value("${CF_INSTANCE_IP:127.0.0.1}") String ip,
                    @Value("${CF_INSTANCE_INDEX:0}") String index) {
        this.ip = ip;
        this.index = index;
    }

    // Returns the instance details as JSON; InstanceDetail is assumed to be a
    // simple POJO with ip and index fields (not shown here)
    @RequestMapping("/")
    public InstanceDetail getAppDetails() {
        return new InstanceDetail(ip, index);
    }
}

So with the application deployed, we can see we have 3 instances as follows

pasapicella@pas-macbook:~$ cf app pas-pcf-routingdemo
Showing health and status for app pas-pcf-routingdemo in org pivot-papicella / space dotnet as papicella@pivotal.io...

name:                pas-pcf-routingdemo
requested state:     started
instances:           3/3
isolation segment:   main
usage:               756M x 3 instances
routes:              pas-pcf-routingdemo-incidental-curia.pcfbeta.io
last uploaded:       Thu 18 Jan 20:41:26 AEDT 2018
stack:               cflinuxfs2
buildpack:           client-certificate-mapper=1.4.0_RELEASE container-security-provider=1.11.0_RELEASE java-buildpack=v4.7.1-offline-https://github.com/cloudfoundry/java-buildpack.git#6a3361a
                     java-main java-opts java-security jvmkill-agent=1... (no decorators apply)

     state     since                  cpu    memory           disk           details
#0   running   2018-01-18T09:44:07Z   0.4%   224.8M of 756M   137.5M of 1G
#1   running   2018-01-18T09:44:13Z   0.8%   205M of 756M     137.5M of 1G
#2   running   2018-01-18T09:44:06Z   0.7%   221.1M of 756M   137.5M of 1G

Now let's simply access our application a few times using the "/" endpoint and verify we are accessing different application containers via the GoRouter's round-robin routing

pasapicella@pas-macbook:~$ http https://pas-pcf-routingdemo-incidental-curia.pcfbeta.io/
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 09:58:10 GMT
Set-Cookie: dtCookie=6$B570EBB532CD9D8DAA2BCAE14C4277FC|RUM+Default+Application|1; Domain=pcfbeta.io; Path=/
X-Vcap-Request-Id: 336ba633-685b-4235-467d-b9833a9e6435

{
    "index": "2",
    "ip": "192.168.16.34"
}

pasapicella@pas-macbook:~$ http https://pas-pcf-routingdemo-incidental-curia.pcfbeta.io/
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 09:58:15 GMT
Set-Cookie: dtCookie=5$3389B3DFBAD936D68CBAF30657653465|RUM+Default+Application|1; Domain=pcfbeta.io; Path=/
X-Vcap-Request-Id: aa74e093-9031-4df5-73a5-bc9f1741a942

{
    "index": "1",
    "ip": "192.168.16.32"
}
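
If you want to see the round-robin behaviour in one hit, a small shell loop does the job just as well (assuming curl is installed):

pasapicella@pas-macbook:~$ for i in 1 2 3 4 5 6; do curl -s https://pas-pcf-routingdemo-incidental-curia.pcfbeta.io/; echo; done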

Now we can request access to just the container with application index "1" as follows

1. First get the Application GUID as shown below

pasapicella@pas-macbook:~$ cf app pas-pcf-routingdemo --guid
5bdf2f08-34a5-402f-b7cb-f29c81d171e0

2. Now let's invoke a call to the application and set the header required to instruct the GoRouter to target a specific application index

eg: curl app.example.com -H "X-CF-APP-INSTANCE":"YOUR-APP-GUID:YOUR-INSTANCE-INDEX"

The example below uses HTTPie.

Accessing Instance 1

pasapicella@pas-macbook:~$ http https://pas-pcf-routingdemo-incidental-curia.pcfbeta.io/ "X-CF-APP-INSTANCE":"5bdf2f08-34a5-402f-b7cb-f29c81d171e0:1"
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 10:20:31 GMT
Set-Cookie: dtCookie=5$FD08A5C88469AF379C8AD3F36FA7984B|RUM+Default+Application|1; Domain=pcfbeta.io; Path=/
X-Vcap-Request-Id: cb19b960-713a-49d0-4529-a0766a8880a7

{
    "index": "1",
    "ip": "192.168.16.32"
}

Accessing Instance 2 

pasapicella@pas-macbook:~$ http https://pas-pcf-routingdemo-incidental-curia.pcfbeta.io/ "X-CF-APP-INSTANCE":"5bdf2f08-34a5-402f-b7cb-f29c81d171e0:2"
HTTP/1.1 200 OK
Content-Length: 34
Content-Type: application/json;charset=UTF-8
Date: Thu, 18 Jan 2018 10:21:09 GMT
Set-Cookie: dtCookie=7$53957A744D473BB024EB1FF4F0CD60A9|RUM+Default+Application|1; Domain=pcfbeta.io; Path=/
X-Vcap-Request-Id: 33cc7922-9b43-4182-5c36-13eee42a9919

{
    "index": "2",
    "ip": "192.168.16.34"
}

More Information

https://docs.cloudfoundry.org/concepts/http-routing.html#http-headers
Categories: Fusion Middleware

Verifying PCF 2.0 with PAS small footprint with bosh CLI

Thu, 2017-12-28 22:27
After installing PCF 2.0, here is how you can verify your installation using the new bosh CLI v2. In this example I use "bosh2", BUT with PCF 2.0 you can actually use "bosh". The bosh v2 CLI existed for a while in PCF 1.12 and some earlier versions alongside bosh v1.

1. SSH into your Ops Manager VM as shown below; in this example we are using GCP

https://docs.pivotal.io/pivotalcf/2-0/customizing/trouble-advanced.html#ssh

2. Create an alias for your ENV as shown below

Note: You will need the BOSH Director IP address, which you can obtain using

  https://docs.pivotal.io/pivotalcf/2-0/customizing/trouble-advanced.html#gather

ubuntu@opsman-pcf:~$ bosh2 alias-env gcp -e y.y.y.y --ca-cert /var/tempest/workspaces/default/root_ca_certificate
Using environment 'y.y.y.y' as anonymous user

Name      p-bosh
UUID      3c886290-144f-4ec7-86dd-b7586b98dc3b
Version   264.4.0 (00000000)
CPI       google_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      (not logged in)

Succeeded

3. Log in to the BOSH Director with UAA

Note: You will need the username / password for the BOSH Director, which you can obtain as follows

  https://docs.pivotal.io/pivotalcf/2-0/customizing/trouble-advanced.html#gather

ubuntu@opsman-pcf:~$ bosh2 -e gcp log-in
Email (): director
Password ():

Successfully authenticated with UAA

Succeeded

4. View all the VMs managed by BOSH as follows

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp vms --column=Instance --column="Process State" --column=AZ --column="VM Type"
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Task 65. Done

Deployment 'cf-adee3657c74c7b9a8e35'

Instance                                             Process State  AZ                      VM Type
backup-prepare/996340c7-4114-472e-b660-a5e353493fa4  running        australia-southeast1-a  micro
blobstore/cdd6fc8d-25c9-4cfb-9908-89eb0164fb80       running        australia-southeast1-a  medium.mem
compute/2dfcc046-c16a-4a36-9170-ef70d1881818         running        australia-southeast1-a  xlarge.disk
control/2f3d0bc6-9a2d-4c08-9ccc-a88bad6382a3         running        australia-southeast1-a  xlarge
database/da60f0e7-b8e3-4f8d-945d-306b267ac161        running        australia-southeast1-a  large.disk
mysql_monitor/a88331c4-1659-4fe4-b8e9-89ce4bf092fd   running        australia-southeast1-a  micro
router/276e308e-a476-4c8d-9555-21623dada492          running        australia-southeast1-a  micro

7 vms

Succeeded

** A few other examples **

- View all the deployments; in this example we just have the PAS small footprint tile installed, so it is the only deployment and no other BOSH-managed tiles exist

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp deployments --column=name
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Name
cf-adee3657c74c7b9a8e35

1 deployments

Succeeded

- Run cloud check to check for issues

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp -d cf-adee3657c74c7b9a8e35 cloud-check
Using environment 'y.y.y.y' as user 'director' (bosh.*.read, openid, bosh.*.admin, bosh.read, bosh.admin)

Using deployment 'cf-adee3657c74c7b9a8e35'

Task 66

Task 66 | 04:20:52 | Scanning 7 VMs: Checking VM states (00:00:06)
Task 66 | 04:20:58 | Scanning 7 VMs: 7 OK, 0 unresponsive, 0 missing, 0 unbound (00:00:00)
Task 66 | 04:20:58 | Scanning 3 persistent disks: Looking for inactive disks (00:00:01)
Task 66 | 04:20:59 | Scanning 3 persistent disks: 3 OK, 0 missing, 0 inactive, 0 mount-info mismatch (00:00:00)

Task 66 Started  Fri Dec 29 04:20:52 UTC 2017
Task 66 Finished Fri Dec 29 04:20:59 UTC 2017
Task 66 Duration 00:00:07
Task 66 done

#  Type  Description

0 problems

Succeeded
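
- Another handy check is to list the processes monit is running on each instance, which the instances command supports via the --ps flag

ubuntu@opsman-pcf:~/scripts$ bosh2 -e gcp -d cf-adee3657c74c7b9a8e35 instances --ps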

More Information

https://docs.pivotal.io/pivotalcf/2-0/customizing/trouble-advanced.html
Categories: Fusion Middleware

Terminating a specific application instance using its index number in Pivotal Cloud Foundry

Tue, 2017-12-19 03:54
I was recently asked how to terminate a specific application instance rather than terminating all instances using "cf delete".

We can easily do this using the CF REST API, or even easier, the CF CLI "cf curl" command, which makes it straightforward to make REST-based calls into Cloud Foundry as shown below.

CF REST API Docs

https://apidocs.cloudfoundry.org/280/

The below assumes you have already logged into PCF using the CF CLI

1. First find an application that has multiple instances

pasapicella@pas-macbook:~$ cf app pas-cf-manifest
Showing health and status for app pas-cf-manifest in org apples-pivotal-org / space development as papicella@pivotal.io...

name:              pas-cf-manifest
requested state:   started
instances:         2/2
usage:             756M x 2 instances
routes:            pas-cf-manifest.cfapps.io
last uploaded:     Sun 19 Nov 21:26:26 AEDT 2017
stack:             cflinuxfs2
buildpack:         client-certificate-mapper=1.2.0_RELEASE container-security-provider=1.8.0_RELEASE java-buildpack=v4.5-offline-https://github.com/cloudfoundry/java-buildpack.git#ffeefb9 java-main
                   java-opts jvmkill-agent=1.10.0_RELEASE open-jdk-like-jre=1.8.0_1...

     state     since                  cpu    memory           disk           details
#0   running   2017-12-16T00:11:27Z   0.0%   241.5M of 756M   139.9M of 1G
#1   running   2017-12-17T10:39:09Z   0.3%   221.3M of 756M   139.9M of 1G

2. Use "cf curl" with the application GUID to check all application instances and their current state

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293653,
      "since": 1513383087
   },
   "1": {
      "state": "RUNNING",
      "uptime": 169591,
      "since": 1513507149
   }
}

3. Now let's delete the instance with index "1". Don't forget that PCF will detect that the current state of the application no longer matches the desired state and will restart the application instance very quickly

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances/1 -X DELETE

Note: You won't get any output, BUT you can verify it has done what you asked by running the command from step #2 again

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293852,
      "since": 1513383087
   },
   "1": {
      "state": "DOWN",
      "uptime": 0
   }
}

If you run it again, say 30 seconds later, you should see your application instance restarted as shown below

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293870,
      "since": 1513383087
   },
   "1": {
      "state": "STARTING",
      "uptime": 11,
      "since": 1513676947
   }
}

pasapicella@pas-macbook:~$ cf curl /v2/apps/`cf app pas-cf-manifest --guid`/instances
{
   "0": {
      "state": "RUNNING",
      "uptime": 293924,
      "since": 1513383087
   },
   "1": {
      "state": "RUNNING",
      "uptime": 45,
      "since": 1513676965
   }
}
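
Note: Recent versions of the CF CLI also include a dedicated command that terminates and recreates a single instance by index; check "cf help -a" to confirm it is available in your CLI version:

pasapicella@pas-macbook:~$ cf restart-app-instance pas-cf-manifest 1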

More Information

pasapicella@pas-macbook:~$ cf curl --help
NAME:
   curl - Executes a request to the targeted API endpoint

USAGE:
   cf curl PATH [-iv] [-X METHOD] [-H HEADER] [-d DATA] [--output FILE]

   By default 'cf curl' will perform a GET to the specified PATH. If data
   is provided via -d, a POST will be performed instead, and the Content-Type
   will be set to application/json. You may override headers with -H and the
   request method with -X.

   For API documentation, please visit http://apidocs.cloudfoundry.org.

EXAMPLES:
   cf curl "/v2/apps" -X GET -H "Content-Type: application/x-www-form-urlencoded" -d 'q=name:myapp'
   cf curl "/v2/apps" -d @/path/to/file

OPTIONS:
   -H            Custom headers to include in the request, flag can be specified multiple times
   -X            HTTP method (GET,POST,PUT,DELETE,etc)
   -d            HTTP data to include in the request body, or '@' followed by a file name to read the data from
   -i            Include response headers in the output
   --output      Write curl body to FILE instead of stdout
Categories: Fusion Middleware
