OTN TechBlog

Oracle Blogs

Latest Blog Posts from Oracle ACEs: March 31 - April 6, 2019

Thu, 2019-04-11 13:07

From time to time we all could use a little help getting all the pieces to fit. This collection of the latest technical blog posts from members of the Oracle ACE program from around the world might be exactly what you need to solve a persistent problem or complete a project. Or, at the very least, somewhere in this list of 21 posts from 14 ACEs you just might pick up a tip or two that will help you to avoid a migraine-inducing situation.

Get it together.

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

Oracle ACE Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland

Oracle ACE Eduardo Legatti
Database Administrator (DBA)
Minas Gerais, Brazil

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France

Oracle ACE Marcel Hofstetter
CEO/Enterprise Consultant, JomaSoft
St. Gallen, Switzerland

Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

Oracle ACE Noriyoshi Shinoda
Hewlett-Packard Japan, Ltd.
Tokyo, Japan

Oracle ACE Peter Scott
Principal and Owner, Sandwich Analytics
Pays de la Loire, France

Oracle ACE Scott Wesley
Systems Consultant/Trainer, SAGE Computing Services
Perth, Australia

Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Saudi Arabia

Oracle ACE Yong Jing
System Architect Manager, Changde Municipal Human Resources and Social Security Bureau
Changde City, China

Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia

Oracle ACE Associate Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

Additional Resources

Latest ACE Technical Articles: March 2019

Tue, 2019-04-09 11:05

What does it take to spend countless hours hunched over a keyboard pounding out code, only to hunch yet again to write a technical article? Beyond the necessary technical skill and expertise, my guess is that it also takes massive quantities of coffee. I have no hard evidence to support that theory other than the long lines at any Starbucks within range of any developer conference. I didn't ask these ACEs how much coffee they consumed as they wrote these articles. They may not drink coffee at all. But the articles listed below are clear evidence that these fine people had the energy and the inclination to transfer their expertise and experience onto the page where you can absorb it.

So pour yourself a cup of whatever keeps you going and soak up some of what these ACEs are serving.

Oracle ACE Director Nassyam Basha
Database Expert, eProseed

Oracle ACE Director Syed Jaffar Hussain
Author, Speaker, Oracle Evangelist, Award-winning DBA

Oracle ACE David Fitzjarrell
Oracle Database Administrator, Pinnacol Assurance

Oracle ACE Michael Gangler
Database Expert, eProseed

Oracle ACE Associate Jian Jiang
Yunqu Tech

Oracle ACE Associate Bin Hong
Senior MySQL DBA, Shanghai Action Information Technology Co., Ltd.
Additional Resources

The Power of Functional Programming

Mon, 2019-04-08 02:36

Oracle has added support for serverless computing on its cloud, enabling developers to leverage programming languages that support functional programming, such as Kotlin. Oracle Functions can be thought of as Function-as-a-Service (FaaS).

In this blog post I will attempt to shed some light on functional programming.

What is Functional Programming?

The functional programming paradigm can be equated to the mathematical notion of a function, y = fn(x).

Mathematical definition:

A function is a process or a relation that associates each element x of a set X (the domain of the function) with a single element y of another set Y, possibly the same set (the codomain of the function).


How do functions benefit programmers?

Functions have certain properties that make them favorable, especially when you want your code to work seamlessly in a multithreaded, concurrent environment. Some of their notable properties are:

  • Functions are deterministic (pure): calling a function multiple times with the same input yields the same output.
  • Functions can be chained (composed). For example:

    Given two functions f: X → Y and g: Y → Z, such that the domain of g is the codomain of f, their composition is the function g ∘ f: X → Z defined by

    (g ∘ f)(x) = g(f(x)).

  • Function composition is associative: if one of (h ∘ g) ∘ f and h ∘ (g ∘ f) is defined, then the other is also defined and they are equal. Thus one can write h ∘ g ∘ f = (h ∘ g) ∘ f = h ∘ (g ∘ f).

These properties encourage immutability in the way functions are written. For example, with Java streams, only variables that are final (or effectively final) can be used inside the lambdas passed to stream operations. This makes such functions safe to use with parallel streams.
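As a small illustrative sketch (my own example, not part of the formal definitions above), composition can be expressed in Kotlin with a generic helper:

// Illustrative only: a generic compose helper, since Kotlin has no built-in composition operator for function types.
fun <A, B, C> compose(g: (B) -> C, f: (A) -> B): (A) -> C = { x -> g(f(x)) }

fun main() {
    val double: (Int) -> Int = { it * 2 }               // f: Int -> Int
    val describe: (Int) -> String = { "result = $it" }  // g: Int -> String
    val describeDouble = compose(describe, double)      // g ∘ f: Int -> String
    println(describeDouble(21))                         // prints "result = 42"
}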

Functional Programming Constructs

Traditionally, functions take parameters such as primitive types. In functional programming, however, functions can consume other functions as well; these are called higher-order functions. As in Python, functions in Kotlin are first-class citizens: they can be assigned to variables and passed around as parameters. The type of a function is a function type, which is written as a parenthesized parameter type list and an arrow to the return type. Consider this function:

fun safeDivide(numerator: Int, denominator: Int) =
    if (denominator == 0) 0.0 else numerator.toDouble() / denominator

It takes two Int parameters and returns a Double, so its type is (Int, Int) -> Double. We can reference the function itself by prefixing its name with ::, and we can assign it to a variable (whose type would normally be inferred, but the type signature is shown here for demonstration):

val f: (Int, Int) -> Double = ::safeDivide

When you have a variable or parameter of function type (sometimes called a function reference), you can call it as if it were an ordinary function, and that will cause the referenced function to be called:

val quotient = f(3, 4)   // 0.75
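As an aside (an illustrative example of my own), an equivalent function value can also be written directly as a lambda instead of a reference to a named function:

val g: (Int, Int) -> Double = { n, d -> if (d == 0) 0.0 else n.toDouble() / d }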

Kotlin

Kotlin is an open source, cross-platform, statically typed, general-purpose programming language with type inference. Kotlin is designed to be fully interoperable with Java, and the JVM version of its standard library depends on the Java Class Library, but type inference allows its syntax to be more concise. Kotlin mainly targets the JVM, but also compiles to JavaScript or native code (via LLVM). Kotlin is sponsored by JetBrains and Google through the Kotlin Foundation.

Let us look at a sample problem that I have solved in traditional OOP (object-oriented programming) and see how we can solve it in FP (functional programming).

Functional Programming in Action

Let us now see functional programming in action. I will be demonstrating functional programming using Kotlin.

Sample Problem

Suppose you need to design a system that navigates a rover across the Mars terrain, simplified to a grid. The system is given the upper-right corner coordinates of the grid (the lower-left corner being implied as 0,0). It is also given the position of the rover on the grid in the form (x, y, d), where x and y are positions on the x and y axes of the grid and d is the direction the rover is facing: N (North), S (South), E (East), or W (West). The system is also given a set of instructions that the rover should use to navigate the grid, a character sequence with the values L (turn left by 90º), R (turn right by 90º), and M (move to the next grid position without changing direction).

Sample Input 

5 5

1 2 N

LMLMLMLMM

Expected Output

1 3 N

Design

You have a Rover, a Plateau (the terrain), and a Main object that will invoke and instruct the Rover.

The Main object initializes the Plateau and passes the navigation instructions to the Rover.

Code - OOP

Below is my implementation of the Rover class in OOP. The code is available on GitHub.
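A minimal, hypothetical sketch of a typical OOP approach (not the exact code on GitHub) might look like this: a class with mutable state and an imperative loop over the instructions.

class Rover(var x: Int, var y: Int, var direction: Char) {

    private val left = mapOf('N' to 'W', 'W' to 'S', 'S' to 'E', 'E' to 'N')
    private val right = mapOf('N' to 'E', 'E' to 'S', 'S' to 'W', 'W' to 'N')

    // Mutates the rover in place for every instruction character.
    fun execute(instructions: String) {
        for (c in instructions) {
            when (c) {
                'L' -> direction = left.getValue(direction)
                'R' -> direction = right.getValue(direction)
                'M' -> move()
            }
        }
    }

    private fun move() {
        when (direction) {
            'N' -> y += 1
            'S' -> y -= 1
            'E' -> x += 1
            'W' -> x -= 1
        }
    }
}

fun main() {
    val rover = Rover(1, 2, 'N')
    rover.execute("LMLMLMLMM")
    println("${rover.x} ${rover.y} ${rover.direction}")   // 1 3 N
}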

Code - FP

Below is the functional version of the same code. The code is available on GitHub.
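A minimal, hypothetical sketch of the functional approach described in the explanation that follows (not the exact code on GitHub) might look like this: an immutable Position folded over the instruction string.

data class Position(val x: Int, val y: Int, val direction: Char)

val left = mapOf('N' to 'W', 'W' to 'S', 'S' to 'E', 'E' to 'N')
val right = mapOf('N' to 'E', 'E' to 'S', 'S' to 'W', 'W' to 'N')

// Returns a new Position one step ahead in the current direction; the input is never mutated.
fun move(p: Position) = when (p.direction) {
    'N' -> p.copy(y = p.y + 1)
    'S' -> p.copy(y = p.y - 1)
    'E' -> p.copy(x = p.x + 1)
    else -> p.copy(x = p.x - 1)
}

// fold threads an accumulator (the current Position) through every instruction character.
fun navigate(start: Position, instructions: String): Position =
    instructions.fold(start) { pos, c ->
        when (c) {
            'L' -> pos.copy(direction = left.getValue(pos.direction))
            'R' -> pos.copy(direction = right.getValue(pos.direction))
            else -> move(pos)
        }
    }

fun main() {
    val destination = navigate(Position(1, 2, 'N'), "LMLMLMLMM")
    println("${destination.x} ${destination.y} ${destination.direction}")   // 1 3 N
}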

Explanation

You can see how much more concise this code looks compared to the OOP approach used to solve the same problem.

I have used fold to reduce the sequence of instructions to a single final position, which is the expected destination.

Fold takes an accumulator and an element of the list/map, combines (reduces) them, and passes the result as the accumulator to the next element, and so on, until the last element has been processed.

Fold is similar to reduce, except that fold takes an initial value, whereas reduce uses the first element as the initial accumulator. Fold can also be used when the accumulator is of a different type than the elements of the list or map.
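A quick illustrative comparison (my own example):

val numbers = listOf(1, 2, 3, 4)
val sumPlusTen = numbers.fold(10) { acc, n -> acc + n }   // 20: fold starts from the supplied initial value
val product = numbers.reduce { acc, n -> acc * n }        // 24: reduce seeds the accumulator with the first element
val joined = numbers.fold("") { acc, n -> acc + n }       // "1234": fold's accumulator can be a different type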

The crux of the problem is converting the sequence of instructions given as a string to a position on the grid.

So, given the instructions 'MML', the rover moves two spaces in whichever direction it is facing and then turns left.

  • Split the string (ins) into a character sequence.
  • Pass the initial position of the Rover to fold.
  • For each character instruction ('L', 'R' or 'M'), turn left, turn right, or move the rover, respectively.

More on functional constructs is available in the Kotlin documentation.

References

Upcoming ACE-Organized Meet-Ups, April-May 2019

Thu, 2019-04-04 06:00

Given their much smaller scale and more intimate feel, meet-ups are a great way to absorb technical knowledge and expertise without the crowds and travel that are factors in big conferences. The meet-ups listed below are scheduled over the next several weeks, and were organized by members of the Oracle ACE Program. If you're in the neighborhood, stop in!

Organizer: Oracle ACE Wataru Morohashi
Topic: Oracle Database入学式 2019 (an "entrance ceremony" introduction to Oracle Database)
Date: Wednesday, April 17, 2019, 19:00-20:30
Organization: Japan Oracle User Group (JPOUG)
Location: NTT Data Advanced Technology Co., Ltd., Collaboration Room, Pacific Marks Tsukishima 1F, 1-15-7 Tsukishima, Chuo-ku, Tokyo
Description: This seminar-style event, presented by Oracle ACE Ryota Watanabe, is intended for those who are new to Oracle Database and want to learn the basics.

Organizer: Oracle ACE Patrick Barel
Topic: Getting Started with JavaScript for PL/SQL and APEX Developers
Date: Friday, May 3, 2019, 8:30 am - 6:00 pm
Organization: Qualogy
Location: NBC Congrescentrum, Blokhoeve 1, 3438 LC, Nieuwegein, or Oracle Netherlands, Hertogswetering 163-167, 3543 AS, Utrecht
Description: For SQL and PL/SQL developers, there is no more powerful framework than Oracle Application Express (APEX). But at the end of the day, APEX creates web apps, and JavaScript is the programming language of the web. The role of JavaScript in APEX is constantly increasing, both for the makers of APEX and for its users (developers). In this special hands-on workshop, Dan McGhan introduces participants to the use of JavaScript in APEX.

Organizer: Oracle ACE Christian Pfundtner
Topic: Oracle Multitenant Database is Inevitable - Let's Go For It!
Date: Friday, May 17, 2019, 8:30 am - 12:30 pm
Organization: DBMasters
Location: MA01 - Veranstaltungszentrum, Stadlauerstraße 56, 1220 Vienna
Description: This talk starts with a refresher on the multitenant architecture and then focuses on the administrative differences compared to the previous architecture. New upgrade options and advantages/disadvantages will also be discussed.

Related Content

Latest Blog Posts from Oracle ACEs: March 24-30, 2019

Tue, 2019-04-02 11:05

Doing what they do...

Technical expertise is table stakes for community members who are nominated and confirmed as members of the Oracle ACE Program. But key to that status is their enthusiasm for sharing that expertise through a variety of channels. Blogs, for instance.  Help yourself to some of that expertise with this batch of the latest Oracle ACE blog posts for the week of March 24-30, 2019.

Oracle ACE Director Franck Pachot
Oracle ACE Hiroshi Sekiguchi
Oracle ACE Jhonata Lamim
Oracle ACE Patrick Joliffe
Oracle ACE Rodrigo Radtke de Souza
Oracle ACE Scott Wesley
Oracle ACE Associate Alex Pagliarini
Oracle ACE Associate Bruno Reis da Silva
Oracle ACE Associate Emad Al-Mousa
Oracle ACE Associate Simo Vilmunen

Related Content

Oracle ACE Program: A High-Five for New Members and Category Climbers

Wed, 2019-03-27 13:40

The 25 people featured in this post have a great deal in common. Each has demonstrated substantial technical skill and knowledge about Oracle technologies across a variety of specialties and interests. But beyond that, each has shown considerable enthusiasm for sharing that expertise with the community through articles, presentations, videos, and other means of communication. Those two factors, technical skill and an enthusiasm for sharing it, are what the Oracle ACE Program is all about.

The faces looking up at you from this page belong to the latest crop of experts to earn a place at one of the three levels in that program, having been confirmed in the first quarter of calendar year 2019. Read a bit about them. Reach out to them. They're great resources.

As you head out to conferences or meet-ups over the next several months, the chances are good that you'll see some of these folks. If so, I think a congratulatory high-five is in order.

Well done, people!

Oracle ACE Director
Ron Ekins
Bolnore Village, West Sussex, England
Twitter LinkedIn 

Ron is a TOGAF-certified Enterprise Architect with over 25 years of experience in the design, development, and delivery of large enterprise systems and innovative IT solutions. He first became an Oracle ACE in June 2015, and was promoted to Oracle ACE Director on January 31, 2019.

Oracle ACE
Roger Cornejo
Durham, North Carolina
Twitter LinkedIn 

Roger has over 34 years of experience with large/complex Oracle applications (versions 4.1.4 – 18c). His main focus is on DB performance analysis and tuning, and for the past 8 years he has been diving deep into AWR tuning data. Confirmed as an Oracle ACE on January 30, 2019.

Mingming (David) Dai
Hefei, China
Twitter LinkedIn 

David has been engaged in Oracle Database-related work for 10 years, and has gained extensive experience in high availability, database diagnostics, and performance tuning. He is a core member of ACOUG (All China Oracle User Group) and CN'SOUG (China Southern Oracle User Group). David first became an Oracle ACE Associate in 2014, and was promoted to Oracle ACE on February 14, 2019.

Jeffrey Kemp
Stratton, Australia
Twitter LinkedIn 

Jeffrey is an application designer and developer specializing in Oracle APEX, Oracle SQL, and PL/SQL. He has 19 years experience with the Oracle Database, including 13 years designing, building and hosting APEX applications. Confirmed as an Oracle ACE on February 14, 2019.

Satoshi Mitani
Tokyo, Japan
Twitter LinkedIn 

Satoshi has worked at Yahoo! Japan for 14 years, the last 8 years as Database Platform Technical Lead. He has extensive experience with MySQL and is an active member of the MySQL Nippon Association user community in Japan. Confirmed as an Oracle ACE on February 14, 2019.

Borys Neselovskyi
Dortmund, Germany
Twitter LinkedIn 

Borys Neselovskyi is a leading Infrastructure Architect at OPITZ CONSULTING. His work there includes the conceptual design and implementation of infrastructure solutions based on Oracle Database/Middleware/Engineered Systems/Virtualization. He also regularly works as a trainer for Oracle University. Confirmed as an Oracle ACE on February 14, 2019.

Yossi Nixon
Ramat HaSharon, Israel
Twitter LinkedIn 

Chief Database Architect at Axxana, Yossi has more than two decades of experience in IT infrastructure management, database design, development, and administration. He became an Oracle ACE Associate in October 2017, and made the jump to ACE on March 19, 2019.

Stefan Oehrli
Muri, Switzerland
Twitter LinkedIn 

Stefan is a principal consultant, trainer, and partner at Trivadis. He began working with database systems in late 1998. His main interests include physical database design, backup and recovery, container technologies, database security, database internals, and everything else related to core Oracle Database technology. Confirmed as an Oracle ACE on February 14, 2019.

Manish Sharma
Delhi, India
Twitter LinkedIn 

Manish, an Oracle Certified Professional, is an Oracle Database trainer and consultant. His Rebellion Rider YouTube channel provides Oracle Database tutorials to over 58K subscribers. Confirmed as an Oracle ACE on February 1, 2019.

Oracle ACE Associate
Flora Barriele
Nyon, Switzerland
Twitter LinkedIn 

Flora has been working in IT for 8 years, including 3 years as an Oracle Database Administrator. She now focuses on Multitenant and Exadata Cloud@Customer. She volunteers for the Swiss Oracle User Group to organize events, and is involved in promoting and encouraging women in technology. Confirmed as an Oracle ACE Associate on February 14, 2019.

Lisandro Fernigrini
Santa Fe, Argentina
Twitter LinkedIn 

A Senior Software Developer with more than 15 years of experience in Oracle Database technologies, Lisandro first got involved with Oracle Database as a DBA, then as a PL/SQL Developer, and later as a Database Architect.  He is an active member of AROUG (Argentina Oracle User Group). Confirmed as an Oracle ACE Associate on  January 17, 2019.

Paolo Gaggia
Rome, Italy
Twitter LinkedIn 

Paolo has 20 years of experience with Oracle technology, with a focus on Oracle Database architecture and troubleshooting. An expert in Oracle Database and Middleware, he currently develops architecture and solutions based on Oracle Blockchain Cloud. Confirmed as an Oracle ACE Associate on March 20, 2019.

Carolin Hagemann
Hamburg, Germany
Twitter XING LinkedIn 

Carolin developed her first Oracle APEX application in 2010 after years of developing web applications with PHP and MySQL. She decided to become an APEX consultant after attending the DOAG conference and exhibition. She organizes meetups in Hamburg and is active in the DOAG NextGen and DOAG Development communities. Confirmed as an Oracle ACE Associate on March 11, 2019.

Hong Bin
Chengdu, China
  LinkedIn 

With more than 10 years of experience with MySQL, Hong Bin is a technical director for Shanghai Action Information Technology Co. where he specializes in Database Management/Performance. Confirmed as an Oracle ACE Associate on February 15, 2019.

Firoz Hussain
Ajman, United Arab Emirates
  LinkedIn 

Firoz is a Senior Oracle Apps DBA with the Thumbay Group. His specialties include Database Management and Performance, Application and Apps Technology, and Cloud Computing. Confirmed as an Oracle ACE Associate on February 14, 2019.

Jian Jiang
Ningbo, China

In his role as a database administrator with Yunqu Tech, Jian Jiang specializes in database management and performance, and is also interested in SQL tuning. Confirmed as an Oracle ACE Associate on February 20, 2019.

Batmunkh Moltov
Ulaanbaatar, Mongolia
Twitter LinkedIn 

An Oracle Certified Master, Batmunkh has over 8 years experience with Unix Systems and Oracle Database, with expertise in Database Management and Performance, Linux, Virtualization, Open Source, and Engineered Systems. Confirmed as an Oracle ACE Associate on February 15, 2019.

Daniel Nelle
Leimersheim, Germany
Twitter LinkedIn 

Since Daniel first started working with Oracle Databases in 2004, databases and IT security have become his core competencies. His focus is drawn to performance tuning and to finding solutions beyond the obvious. Confirmed as an Oracle ACE Associate on February 15, 2019.

Alex Pagliarini
Rio Grande do Sul, Brazil
Twitter LinkedIn

With expertise in Applications and Apps Technology, MySQL, and Database App Development, Alex has been working professionally with Oracle EBS for 8 years. Confirmed as an Oracle ACE Associate on January 30, 2019.

Mahmoud Rabie
Riyadh, Saudi Arabia
  LinkedIn 

Mahmoud is a Senior IT Solution Architect and Senior IT Trainer with over 17 years of total experience. An Oracle Database SQL Certified Expert and Sun Certified Java Programmer, his expertise includes database app development, Linux, Virtualization, and Open Source. Confirmed as an Oracle ACE Associate on February 14, 2019.

Masahiro Tomita
Nagano, Japan
Twitter

Masahiro works at Fujitsu Cloud Technologies, where he specializes in MySQL, Linux, Virtualization and Open Source. He is a representative of the Japan MySQL User group. Confirmed as an Oracle ACE Associate March 11, 2019.

Elisa Usai
Pully, Switzerland
Twitter LinkedIn 

Elisa has more than 10 years of experience in IT, with expertise that includes MySQL, Oracle technologies, and monitoring solutions. Active in the Oracle community, she is a member of the ITOUG board and regularly speaks at conferences and events. Confirmed as an Oracle ACE Associate on February 15, 2019.

Simo Vilmunen
Toronto, Canada
Twitter LinkedIn 

Simo, a Technical Architect at Uponor Business Solutions, has worked with Oracle databases since 2000 and with Oracle Applications since 2004. His current focus is on using Oracle Cloud Infrastructure (OCI) functionality to modernize solutions like Oracle Applications by automation, scaling and infrastructure as code. Confirmed as an Oracle ACE Associate on February 15, 2019.

Shengdong Zhang
Beijing, China

Shengdong Zhang has been working with Oracle Database since 2010, and has extensive experience in database backup and recovery, monitoring, troubleshooting, performance tuning, and architecture design. Confirmed as an Oracle ACE Associate on March 20, 2019.

Chenxi Zhang
Zhejiang, China

Chenxi Zhang is the Technical Manager for a domestic insurance company in China, and he is the co-founder of CN'SOUG (China Southern Oracle User Group), the largest Oracle User Group in southern China. His expertise is in Database Management/Performance, Middleware/SOA, Linux, Virtualization and Open Source. Confirmed as an Oracle ACE Associate on January 7, 2019.

About the Oracle ACE Program

Recognized for their technical expertise, Oracle ACEs contribute knowledge with articles, technical advice, blog posts, presentations, and tweets. Join the community and learn from their insights and experience. Learn more.

Related Resources

Getting Started With Git On Oracle Cloud

Mon, 2019-03-25 12:47

The new and improved Oracle Marketplace is now available from within the Oracle Cloud Infrastructure console.  The marketplace contains several applications that developers commonly use in their projects; things like source control, bug tracking and CI/CD applications - with more being added all the time.  The best part about the marketplace is that it gives you the ability to launch instances running these tools with one click.  Let's take a look at how to launch one of these instances using something that nearly every software project uses - source control.  More specifically, the current most popular source control system: Git.

To get started with git, head to your Oracle Cloud console and select Marketplace from the left sidebar menu:

From the Marketplace, select 'GitLab CE Certified by Bitnami':

On the following page, click 'Launch Instance':

Choose the image and compartment, review and accept the terms, then click 'Launch Instance':

The next page should look familiar to you if you have previously launched an instance on Oracle Cloud.  Enter your instance name, choose your options related to the instance shape, and make the necessary networking selections.  Be sure to upload an SSH key; we'll need it later on.  When you're satisfied, click 'Create':

You'll be taken next to the instance details page while the instance is provisioned.  

While the instance provisions, double check that the subnet you have chosen has the proper security and route table rules to allow the instance to be web accessible.  From the sidebar, select 'Networking' then 'Virtual Cloud Networks':

From the Virtual Cloud Networks landing page, select the VCN you chose when creating the network, then from the following page locate the subnet that you chose.  Here you'll be able to navigate directly to the proper rules that you will need to verify or create:

First, verify (or create) a route table rule with destination 0.0.0.0/0 that targets your internet gateway, so the instance is reachable from the internet:

Then make sure the security list allows ports 80 and 443 for all incoming traffic (please ensure that this subnet is not associated with any instances that you do not want to expose to the web):

By now your GitLab instance should be fully provisioned.  Head to the instance details page (Compute -> Instances in the left sidebar) and view the details for your GitLab instance.  Take note of the public IP address:

Click on 'Console Connections' in the left menu, then 'Create Console Connection' and populate the dialog with the SSH key you used when creating the instance and click 'Create Console Connection':

Now you should be able to reach your running GitLab instance via your browser at http://<public ip>:

The default username is 'root'.  To find your initial password, SSH into the instance using the following command:

ssh bitnami@<public ip> -i /path/to/ssh_key

The initial password is stored in a file called 'bitnami_credentials'. To view it:

cat ./bitnami_credentials

The file contains the automatically generated initial password for the 'root' user.

Log in and get started working with GitLab on Oracle Cloud!
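From here, a typical first step might be to point a local repository at the new server and push (shown with a hypothetical project path; substitute the project you create in the GitLab UI and your instance's public IP):

git remote add origin http://<public ip>/root/my-first-project.git
git push -u origin master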

ACE Program Members Deliver Sessions at Collaborate 2019 in San Antonio

Mon, 2019-03-25 12:46

A small army of experts will present more than 1000 sessions at Collaborate 2019, April 7-11, 2019 in San Antonio, TX, (the home of the historic Alamo). As the list below shows, members of the Oracle ACE Program are well represented among those delivering sessions. Each of the listed session titles links to specific time, date, and location information. That should help as you're putting together your agenda for the event.

Collaborate 2019
April 7-11, 2019
Henry B. González Convention Center
San Antonio, TX USA
Information

Presenter Session Date

Oracle ACE Ahmed Aboulnaga
Extract Partial Data from ASO Cube of Oracle Planning and Budgeting Cloud in a Readable Format - 4/10/2019

Oracle ACE Ahmed Alomari
Taming the OACore JVM - 4/10/2019
Applications Database Tuning Panel - 4/8/2019

Oracle ACE Director Biju Thomas
Let's talk AI, ML, and DL - 4/8/2019
Practical Usage of ORAchk and DBSAT for E-Business Suite Applications - 4/9/2019
Eighteen (18) Database New Features you must See (c) - 4/11/2019

Oracle ACE Bill Dunham
Workshop: EBS Upgrades StreetSmarts: A Guide to Executing Oracle EBS 12.2.x Upgrades - 4/7/2019
To Cloud or Not to Cloud: That is the Question! - 4/8/2019
OAUG Customizations & Alternatives Special Interest Group - 4/9/2019
R12.2 Happy Hour - San Antonio Style! - 4/9/2019
Flying Right Through the Clouds: Lifting and Shifting Oracle EBS On-Premise to MS Azure Cloud - 4/10/2019
Let the Excitement Continue: Meet the New & Modern 12.2 EBS! - 4/11/2019

Oracle ACE Chris Couture
Developing a Cohesive User Experience - 4/9/2019
What's Possible with PeopleSoft Chatbots - 4/10/2019

Oracle ACE Director Dan Vlamis
Getting from Answers/Dashboards to Data Visualization - 4/9/2019
Smart Targeting Consumers: DX Marketing's Autonomous Data Warehousing Future - 4/9/2019
Modern Machine Learning with Oracle Analytics Cloud and Autonomous Data Warehouse Cloud - 4/10/2019

Oracle ACE Associate Dayalan Punniyamoorthy
Extract “Partial Data” from ASO Cube of Oracle Planning and Budgeting Cloud in a Readable Format - 4/7/2019

Oracle ACE Director Francis Mignault
What Every DBA Needs to Know About Oracle Application Express - 4/8/2019
Oracle Forms and Oracle Application Express: The Odd Couple - 4/10/2019
Worlds Collide! APEX and Digital Assistants Revolutionize Your ERP Applications - 4/11/2019

Oracle ACE Associate Fred Denis
Lessons Learned in Exadata Patching (Including 18c and in the Cloud) - 4/8/2019
Must-Have Free Scripts When Working With Exadata / GI / ASM / opatch - 4/9/2019

Oracle ACE Ilmar Kerm
Oracle Database Infrastructure as Code with Ansible - 4/8/2019
Implementing Incremental Forever Strategy for Oracle Database Backups - 4/9/2019

Oracle ACE Director Martin Klier
YOUR Machine and MY Database - A Performing Relationship!? (2019 edition) - 4/10/2019
Oracle Core: Database I/O - 4/10/2019

Oracle ACE Michael Barone
OAUG: E-Business Suite Security SIG -- On-Premise and Cloud Security - 4/11/2019

Oracle ACE Michael Messina
MySQL 8 New Features, Updates and Changes - 4/9/2019
MySQL Database Security - 4/10/2019
The Oracle Database Security Assessment Tool: Know Your Security Risks - 4/10/2019
Oracle Database Sharding - Architecture and Concepts - 4/10/2019

Oracle ACE Director Osama Mustafa
Best Practices for Virtualizing Oracle RAC with VMware Cloud on AWS - 4/8/2019
Using Python With Oracle Database - 4/9/2019

Oracle ACE Roger Cornejo
Scale-Up Your Use of the Advisor Framework - 4/8/2019
Using the Dynamic Oracle Performance Analytics Framework - 4/10/2019

Oracle ACE Director Susan Behn
Data Security-Wizarding with EBS Security Wizards - 4/7/2019
OAF Personalization's 2019 - Quick Innovation Wins - 4/8/2019
Enforcing Business Rules With Approvals Management (AME) in OM, GL, PO, AP and User Management - 4/9/2019
Role Based Access Controls – Side by Side Comparison of EBS and Cloud - 4/9/2019
RBAC Training - Automated processes for new users and roles with approval processes in AME - 4/10/2019

Oracle ACE Tim Warner
Closing the Workforce Management Circle – Using PaaS to Extend SaaS - 4/10/2019

Other Events Featuring Oracle ACEs

About the Oracle ACE Program

Recognized for their technical expertise, Oracle ACEs contribute knowledge with articles, technical advice, blog posts, presentations, and tweets. Join the community and learn from their insights and experience. Learn more.

Podcast: Polyglot Programming and GraalVM

Tue, 2019-03-19 23:15

How many programming languages are there? I won’t venture a guess. There must be dozens, if not hundreds. The 2018 State of the Octoverse Report from Github identified the following as the top ten most popular languages among GitHub contributors:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C++
  6. C#
  7. TypeScript
  8. Shell
  9. C
  10. Ruby

So the word “polyglot” definitely describes the world of the software coder.

Polyglot programming is certainly nothing new, but as the number of languages grows, and as language preferences among coders continue to evolve, what happens to decisions about which language to use in a particular project? In this program we'll explore the meaning and evolution of polyglot programming, examine the benefits and challenges of mixing and matching different languages, and then discuss the GraalVM project and its impact on polyglot programming.

This is Oracle Groundbreakers Podcast #364. It was recorded on Monday February 11, 2019. Time to listen...

The Panelists (listed alphabetically)

Roberto Cortez
Java Champion
Founder and Organizer, JNation

Dr. Chris Seaton, PhD
Research Manager, Virtual Machine Group, Oracle Labs

Oleg Selajev
Lead Developer Advocate, GraalVM, Oracle Labs

Additional Resources

Coming Soon
  • Dmitry Kornilov, Tomas Langer, Jose Rodriguez, and Phil Wilkins discuss the ins, outs, and practical applications of Helidon, the lightweight Java microservices framework.
  • What's Up with Serverless? A panel discussion of where Serverless fits in the IT landscape.
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018
Subscribe

Never miss an episode of the Oracle Groundbreakers Podcast!

Participate

If you have a topic suggestion for the Oracle Groundbreakers Podcast, or if you are interested in participating as a panelist, please post a comment. We'll get back to you right away.

Enterprise applications meet cloud native

Fri, 2019-03-15 08:54

Speaking with Enterprise customers, many are adopting a cloud-native strategy for new, in-house development projects. This approach of short development cycles, iterative functional delivery and automated CI/CD tooling is allowing them to deliver innovation for users and customers quicker than ever before. One of Oracle’s top 10 predictions for developers in 2019 is that legacy, enterprise applications jump to cloud-native development approaches.

The need to move to cloud native is rooted in the fact that, at heart, all companies are software companies. Those that can use software to their advantage, to speed up and automate their business and make it easier for their customers to interact with them, win. This is the nature of business today, and the reason that start-ups such as Uber can disrupt whole existing industries.

Cloud native technologies like Kubernetes, Docker containers, micro-services and functions provide the basis to scale, secure and enable these new solutions. 

However, enterprises typically have a complex stack of applications and infrastructure; this usually means monolithic custom or ISV applications that are anything but cloud native. The new cloud native solutions need to be able to interact with these legacy systems, but they run in the cloud rather than on-premises and need delivery cycles of days rather than months. Enterprises need to address this technical debt in order to realise the full benefits of a cloud-native approach. Re-writing these monoliths is not practical in the short term due to the resources and time needed. So, what are the options to modernise enterprise applications?

Move the Monolith

Moving these applications to the cloud can realise the cloud economics of elasticity and paying only for what you use. This approach treats infrastructure as code rather than as physical compute, network, and storage. Using tools such as Terraform – https://www.terraform.io – to create and delete infrastructure resources, and Packer – https://www.packer.io – to manage machine images, means we can create environments when needed and tear them down when not. Although this does not immediately address modernisation of the application itself, it does start to automate the infrastructure and begin to integrate these applications into cloud native development and delivery. https://blogs.oracle.com/developers/build-oracle-cloud-infrastructure-custom-images-with-packer-on-oracle-developer-cloud

Containerise and Orchestrate 

A cloud native strategy is largely based on running applications in Docker containers to give the flexibility of deployment on premises and across different cloud providers. A common approach is to containerise existing applications and run them on premises before moving to the cloud. 

Many enterprise applications, both in-house developed and ISV supplied, are WebLogic based, and enterprises are looking to do the same with these. WebLogic now runs in Docker containers, so the same approach can be taken – https://hub.docker.com/_/oracle-weblogic-server-12c.

As initial and suitable workloads (workloads that have fewer on-premises integration points, or are good candidates from a compliance standpoint) become containerised and moved to the cloud, the management and orchestration of containers into solutions begins to become an issue. Container management or orchestration platforms such as Kubernetes, Docker Swarm, and others are being adopted, and Kubernetes is emerging as the platform of choice for enterprises to manage containers in the cloud. Oracle has developed a WebLogic Kubernetes operator that allows Kubernetes to understand and manage WebLogic domains, clustering, and so on. https://github.com/oracle/weblogic-kubernetes-operator

Integrating with version control such as GitHub and secure Docker repositories, and using CI/CD tooling to deploy to Kubernetes, really brings these enterprise applications to the core of a cloud native strategy. It also means existing WebLogic and Java skills in the organisation continue to be relevant in the cloud.

Breaking It Down

To fully benefit from running these applications in the cloud, the functionality needs to be integrated with the new cloud native services and also to become more agile. An evolving pattern is to take an agile approach, using a series of iterations to refactor the enterprise application. A first step is to separate the UI from the functional code and create APIs to access the business functionality. This allows new cloud native applications to access the required functionality and facilitates the shorter delivery cycles enterprises are demanding. Over time, these services can be rebuilt and deployed as cloud services, eventually migrating away from the legacy application. Helidon is a collection of Java libraries for writing microservices that helps re-use existing Java skills when re-developing the code behind the services.

As more and more services are deployed, management, versioning, and monitoring become increasingly important, and a service mesh is evolving as the way to handle them. A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It's responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. Istio is emerging as an enterprise choice and can easily be installed on Kubernetes.

In Conclusion

More and more enterprises are adopting a cloud native approach for new development projects. They are also struggling with the technical debt of large monolithic enterprise applications when trying to modernise them. However, there are a number of strategies and technologies that can be used to help migrate and modernise these legacy applications in the cloud. With the right approach, existing skills can be maintained and evolved into a container-based, cloud native environment.

Kata Containers: An Important Cloud Native Development Trend

Thu, 2019-03-14 15:18
Introduction

One of Oracle’s top 10 predictions for developers in 2019 was that a hybrid model that falls between virtual machines and containers will rise in popularity for deploying applications.

Kata Containers are a relatively new technology that combine the speed of development and deployment of (Docker) containers with the isolation of virtual machines. In the Oracle Linux and virtualization team we have been investigating Kata Containers and have recently released Oracle Container Runtime for Kata on Oracle Linux yum server for anyone to experiment with. In this post, I describe what Kata containers are as well as some of the history behind this significant development in the cloud native landscape. For now, I will limit the discussion to Kata as containers in a container engine. Stay tuned for a future post on the topic of Kata Containers running in Kubernetes.

History of Containerization in Linux

The history of isolation, sharing of resources and virtualization in Linux and in computing in general is rich and deep. I will skip over much of this history to focus on some of the key landmarks on the way there. Two Linux kernel features are instrumental building blocks for the Docker Containers we’ve become so familiar with: namespaces and cgroups.

Linux namespaces are a way to partition kernel resources such that two different processes have their own view of resources such as process IDs, file names or network devices. Namespaces determine what system resources you can see.

Control Groups, or cgroups, are a kernel feature that enables processes to be grouped hierarchically so that their use of subsystem resources (memory, CPU, I/O, etc.) can be monitored and limited. Cgroups determine what system resources you can use.

One of the earliest containerization features available in Linux to combine both namespaces and cgroups was Linux Containers (LXC). LXC offered a userspace interface to make the Linux kernel containment features easy to use and enabled the creation of system or application containers. Using LXC, you could run, for example, CentOS 6 and Oracle Linux 7, two completely different operating systems with different userspace libraries and versions, on the same Linux kernel.

Docker expanded on this idea of lightweight containers by adding packaging, versioning, and component reuse features. Docker Containers have become widely used because they appealed to developers: they shortened the build-test-deploy cycle because they made it easier to package and distribute an application or service as a self-contained unit, together with all the libraries needed to run it. Their popularity also stems from the fact that they appeal to developers and operators alike. Essentially, Docker Containers bridge the gap between dev and ops and shorten the cycle from development to deployment.

Because containers —both LXC and Docker-based— share the same underlying kernel, it’s not inconceivable that an exploit able to escape a container could access kernel resources or even other containers. Especially in multi-tenant environments, this is something you want to avoid.

Projects like Intel® Clear Containers and Hyper runV took a different approach to parceling out system resources: their goal was to combine the strong isolation of VMs with the speed and density (the number of containers you can pack onto a server) of containers. Rather than relying on namespaces and cgroups, they used a hypervisor to run a container image.

Intel® Clear Containers and Hyper runV came together in Kata Containers, an open source project and community, which saw its first release in March 2018.

Kata Containers: Best of Both Worlds

The fact that Kata Containers are lightweight VMs means that, unlike traditional Linux containers or Docker Containers, Kata Containers don't share the same underlying Linux kernel. Kata Containers fit into the existing container ecosystem because developers and operators interact with them through a container runtime that adheres to the Open Container Initiative (OCI) specification. Creating, starting, stopping and deleting containers works just the way it does for Docker Containers.

Image by OpenStack Foundation licensed under CC BY-ND 4.0

In summary, Kata Containers:

  • Run their own lightweight OS and a dedicated kernel, offering memory, I/O and network isolation
  • Can use hardware virtualization extensions (VT) for additional isolation
  • Comply with the OCI (Open Container Initiative) specification as well as CRI (Container Runtime Interface) for Kubernetes
Installing Oracle Container Runtime for Kata

As I mentioned earlier, we've been researching Kata Containers here in the Oracle Linux team, and as part of that effort we have released software for customers to experiment with. The packages are available on Oracle Linux yum server and its mirrors in Oracle Cloud Infrastructure (OCI). Specifically, we've released a kata-runtime and related components, as well as an optimized Oracle Linux guest kernel and guest image used to boot the virtual machine that will run a container.

Oracle Container Runtime for Kata relies on QEMU and KVM as the hypervisor to launch VMs. To install Oracle Container Runtime for Kata on a bare metal compute instance on OCI:

Install QEMU

QEMU is available in the ol7_kvm_utils repo. Enable that repo and install QEMU:

sudo yum-config-manager --enable ol7_kvm_utils
sudo yum install qemu

Install and Enable Docker

Next, install and enable Docker.

sudo yum install docker-engine
sudo systemctl start docker
sudo systemctl enable docker

Install kata-runtime and Configure Docker to Use It

First, configure yum for access to the Oracle Linux Cloud Native Environment - Developer Preview yum repository by installing the oracle-olcne-release-el7 RPM:

sudo yum install oracle-olcne-release-el7

Now, install kata-runtime:

sudo yum install kata-runtime

To make the kata-runtime an available runtime in Docker, modify Docker settings in /etc/sysconfig/docker. Make sure SELinux is not enabled.

The line that starts with OPTIONS should look like this:

$ grep OPTIONS /etc/sysconfig/docker
OPTIONS='-D --add-runtime kata-runtime=/usr/bin/kata-runtime'

Next, restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker

Run a Container Using Oracle Container Runtime for Kata

Now you can use the usual docker command to run a container, adding the --runtime option to indicate that you want to use kata-runtime. For example:

sudo docker run --rm --runtime=kata-runtime oraclelinux:7 uname -r
Unable to find image 'oraclelinux:7' locally
Trying to pull repository docker.io/library/oraclelinux ...
7: Pulling from docker.io/library/oraclelinux
73d3caa7e48d: Pull complete
Digest: sha256:be6367907d913b4c9837aa76fe373fa4bc234da70e793c5eddb621f42cd0d4e1
Status: Downloaded newer image for oraclelinux:7
4.14.35-1909.1.2.el7.container

To review what happened here: Docker, via the kata-runtime, instructed KVM and QEMU to start a VM based on a special-purpose kernel and a minimized OS image. Inside the VM a container was created, which ran the uname -r command. You can see from the kernel version that a "special" kernel is running.

Running a container this way takes more time than starting a traditional container based on namespaces and cgroups, but if you consider the fact that a whole VM is launched, it's quite impressive. Let's compare:

# time docker run --rm --runtime=kata-runtime oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m2.480s
user 0m0.048s
sys 0m0.026s

# time docker run --rm oraclelinux:7 echo 'Hello, World!'
Hello, World!

real 0m0.623s
user 0m0.050s
sys 0m0.023s

That’s about 2.5 seconds to launch a Kata Container versus 0.6 seconds to launch a traditional container.

Conclusion

Kata Containers represent an important phenomenon in the evolution of cloud native technologies. They address both the need for security through virtual machine isolation as well as speed of development through seamless integration into the existing container ecosystem without compromising on computing density.

In this blog post I've described some of the history that brought us Kata Containers, and shown how you can experiment with them yourself using the Oracle Container Runtime for Kata packages.

Getting Your Feet Wet With OCI Streams

Thu, 2019-03-14 14:35

Back in December we announced the development of a new service on Oracle Cloud Infrastructure called Streaming.  The announcement, product page and documentation have a ton of use cases and information on why you might use Streaming in your applications, so let's take a look at the how.  The OCI Console allows you to create streams and test them out via the UI dashboard, but here's a simple example of how to both publish and subscribe to a stream in code via the OCI Java SDK.

First you'll need to create a stream.  You can do that via the SDK, but it's pretty easy to do via the OCI Console.  From the sidebar menu, select Analytics - Streaming and you'll see a list of existing streams in your tenancy and selected compartment.

Click 'Create Stream' and populate the dialog with the information requested:

After your stream has been created you can view the Stream Details page, which looks like this:

As I mentioned above, you can test out stream publishing by clicking 'Produce Test Message' and populating the message and then test receiving by refreshing the list of 'Recent Messages' on the bottom of the Stream Details page.

To get started working with this stream in code, download the Java SDK (link above) and make sure it's on your classpath.  After you've got the SDK ready to go, create an instance of a StreamClient which will allow you to make both 'put' and 'get' style requests.  Producing a message to the stream looks like so:
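As a rough sketch only (verify the builder and method names against the OCI Java SDK version you're using, and substitute your own stream OCID and messages endpoint from the Stream Details page), producing a message looks something like this:

import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.streaming.StreamClient;
import com.oracle.bmc.streaming.model.PutMessagesDetails;
import com.oracle.bmc.streaming.model.PutMessagesDetailsEntry;
import com.oracle.bmc.streaming.requests.PutMessagesRequest;

import java.nio.charset.StandardCharsets;
import java.util.Collections;

public class ProduceExample {
    public static void main(String[] args) throws Exception {
        String streamId = "<stream OCID>";                       // placeholder: your stream's OCID
        String messagesEndpoint = "<stream messages endpoint>";  // placeholder: shown on the Stream Details page

        // Authenticate with the standard ~/.oci/config profile.
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider("~/.oci/config", "DEFAULT");

        StreamClient client = new StreamClient(provider);
        client.setEndpoint(messagesEndpoint);

        // Each message is a key/value pair of raw bytes.
        PutMessagesDetails details = PutMessagesDetails.builder()
                .messages(Collections.singletonList(
                        PutMessagesDetailsEntry.builder()
                                .key("example-key".getBytes(StandardCharsets.UTF_8))
                                .value("Hello, Streaming!".getBytes(StandardCharsets.UTF_8))
                                .build()))
                .build();

        client.putMessages(PutMessagesRequest.builder()
                .streamId(streamId)
                .putMessagesDetails(details)
                .build());
    }
}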

Reading the stream requires you to work with a Cursor.  I like to work with group cursors, because they handle auto committing so I don't have to manually commit the cursor, and here's how you'd create a group cursor and use it to get the stream messages.  In my application I have it in a loop and reassign the cursor that is returned from the call to client.getMessages() so that the cursor always remains open and active.
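Again as a hedged sketch (continuing with the client and streamId from the snippet above; the request, response, and details classes come from the same streaming packages, and the names should be verified against your SDK version), a group cursor read loop might look like this:

// Create a group cursor; with commitOnGet the offsets are committed automatically on each read.
String cursor = client.createGroupCursor(CreateGroupCursorRequest.builder()
        .streamId(streamId)
        .createGroupCursorDetails(CreateGroupCursorDetails.builder()
                .groupName("example-group")                       // hypothetical consumer group name
                .type(CreateGroupCursorDetails.Type.TrimHorizon)  // start from the oldest available message
                .commitOnGet(true)
                .build())
        .build()).getCursor().getValue();

while (true) {
    GetMessagesResponse messages = client.getMessages(GetMessagesRequest.builder()
            .streamId(streamId)
            .cursor(cursor)
            .build());
    messages.getItems().forEach(m ->
            System.out.println(new String(m.getValue(), StandardCharsets.UTF_8)));
    cursor = messages.getOpcNextCursor();   // reassign the returned cursor so it stays open and active
    Thread.sleep(1000);
}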

And that's what it takes to create a stream, produce a message and read the messages from the stream.  It's not a difficult feature to implement and the performance is comparable to Apache Kafka in my observations, but it's nice to have a native OCI offering that integrates well into my application.  There are also future integration plans for upcoming OCI services that will eventually allow you to publish to a stream, so stay tuned for that.

OCI New Service Roundup

Wed, 2019-03-13 16:28
This blog was originally published by Jesse Butler on the Cloud Native blog. 

Nine Ways Oracle Cloud is Open

Wed, 2019-03-13 12:58

In the recent Break New Ground paper, 10 Predictions for Developers in 2019, openness was cited as a key factor. Developers want to choose their clouds based on openness. They want a choice of languages, databases, and compute shapes, among other things. This allows them to focus on what they care about – creating – without ops concerns or lock in. In this post, we outline the top ways that Oracle is delivering a truly open cloud. 

Databases

Oracle Cloud’s Autonomous Database, which is built on top of Oracle Database, conforms to open standards, including ISO SQL:2016, JDBC, Python PEP 249, ODBC, and many more. Autonomous Database is a multi-model database and supports relational as well as non-relational data, such as JSON, Graph, Spatial, XML, Key/Value, Text, amongst others. Because Oracle Autonomous Database is built on Oracle Database technology, customers can “lift and shift” workloads from/to other Oracle Database environments, including those running on third-party clouds and on-premises infrastructure. This flexibility makes Oracle Autonomous Database a truly open cloud service compared to other database cloud services in the market. Steve Daheb from Oracle Cloud Platform provides more information in this Q&A.

In addition, Oracle MySQL continues to be the world's most popular open source database (source code) and is available in Community and Enterprise editions. MySQL implements standards such as ANSI/ISO SQL, ODBC, JDBC and ECMA. MySQL can be deployed on-premises, on Oracle Cloud, and on other clouds.

Integration Cloud

With Oracle Data Integration Platform, you can access numerous Oracle and non-Oracle sources and targets to integrate databases with applications. For example, you can use MySQL databases on a third-party cloud as a source for Oracle apps, such as ERP, HCM, CX, NetSuite, and JD Edwards. In addition, Integration Cloud allows you to integrate Oracle Big Data Cloud, Hortonworks Data Platform, or Cloudera Enterprise Hub with a variety of sources: Hadoop, NoSQL, or Oracle Database.

You can also connect apps on Oracle Cloud with third-party apps. Consider a Quote to Order system. When a customer accepts a quote, the salesperson can update it in the CRM system, leverage Oracle’s predefined integration flows, with Oracle ERP Cloud, and turn the quote into an order.  

Java

Java is one of the top programming languages on Github (Oracle Code One 2018 keynote), with over 12 million developers in the community. All development for Java happens in OpenJDK and all design and code changes are visible to the community. Therefore, the evolution of ongoing projects and features is transparent. Oracle has been talking with developers who are and aren’t using Java to ensure that Java remains open and free, while making enhancements to OpenJDK. In 2018, Oracle open sourced all remaining closed source features: Application Class Data Sharing, Project ZGC, Flight Recorder and Mission Control. In addition, Oracle delivers binaries that are pure OpenJDK code, under the GPL, giving developers freedom to distribute them with frameworks and applications.

Oracle Cloud Native Services, including Oracle Container Engine for Kubernetes

Cloud Native Services include the Oracle Container Engine for Kubernetes and Oracle Cloud Infrastructure Registry. Container Engine is based off unmodified Kubernetes codebase and clusters can support bare-metal nodes, virtual machines or heterogeneous BM/VM environments. Oracle’s Registry is based off open Docker v2 standards, allowing you to use the same Docker commands to interact with it as you would with Docker Hub. Container images can be used on-premises and on Container Engine giving you portability. It can also interoperate with third-party registries and Oracle Cloud Infrastructure Registry with third-party Kubernetes environments. In addition Oracle Functions is based off the open source Fn Project. Code written for Oracle Functions will therefore run not only on Oracle Cloud, but with Fn clusters on third-party clouds and on-premises environments as well.

Oracle offers the same cloud native capabilities as part of Oracle Linux Cloud Native Environment. This is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily deployed, have been tested for interoperability, and for which enterprise-grade support is offered. With Oracle’s Cloud Native Framework, users can run cloud native applications in the Oracle Cloud and on-premises, in an open hybrid cloud and multi-cloud architecture.

Oracle Linux Operating System

Oracle Linux, which is included with Oracle Cloud subscriptions at no additional cost, is a proven, open source operating system (OS) that is optimized for performance, scalability, reliability, and security. It powers everything in the Oracle Cloud – Applications and Infrastructure services. Oracle extensively tests and validates Oracle Linux on Oracle Cloud Infrastructure, and continually delivers innovative new features to enhance the experience in Oracle Cloud.

Oracle VM VirtualBox

Oracle VM VirtualBox is the world’s most popular, open source, cross-platform virtualization product. It lets you run multiple operating systems on Mac OS, Windows, Linux, or Oracle Solaris. Oracle VM VirtualBox is ideal for testing, developing, demonstrating, and deploying solutions across multiple platforms on one machine. It supports exporting of virtual machines to Oracle Cloud Infrastructure and enables them to run on the cloud. This functionality facilitates the experience of using VirtualBox as the development platform for the cloud.

Identity Cloud Services

Oracle Identity Cloud Service provides 100% API coverage of all product capabilities for rich integration with custom applications. It allows compliance with open standards such as SCIM, REST, OAuth and OpenID Connect for easy application integrations. Customers can easily consume these APIs in their applications to take advantage of identity management capabilities.

Oracle Identity Cloud Service seamlessly interoperates with on-premises identities in Active Directory to provide single sign-on between cloud and on-premises applications. Through its Identity Bridge component, Identity Cloud Service can synchronize all identities and groups from Active Directory into its own identity store in the cloud. This allows organizations to take advantage of their existing investment in Active Directory while extending those identities to Oracle Cloud and external SaaS applications.

Oracle Blockchain Platform

Oracle Blockchain Platform is built on open source Hyperledger Fabric, making it interoperable with non-Oracle Hyperledger Fabric instances deployed in your data center or in third-party clouds. In addition, the platform exposes REST APIs for plug-and-play integration with Oracle SaaS and on-premises apps such as NetSuite ERP, Flexcube core banking, and the Open Banking API Platform, among others.

Oracle Mobile Hub (Mobile Backend as a Service – MBaaS)

Oracle Mobile Hub is an open and flexible platform for mobile app development. With Mobile Hub, you can:

  • Develop apps for any mobile client: iOS- or Android-based phones

  • Connect to any backend via standard RESTful interfaces and SOAP web services

  • Support both native mobile apps and hybrid apps. For example, you can develop with Swift or Objective-C for native iOS apps, Java for native Android apps, and JavaScript for hybrid mobile apps

In addition, Oracle Visual Builder (VB) is a cloud-based software development Platform as a Service (PaaS) and a hosted environment for your application development infrastructure. It provides an open, standards-based way to develop, collaborate on, and deploy applications within Oracle Cloud, making it easy to create and host web and mobile applications in a secure cloud environment.

Takeaway

In choosing a cloud vendor, openness can provide a significant advantage, allowing you to choose amongst languages, databases, hardware, clouds, and on-premises infrastructure.  With a free trial on Oracle Cloud, you can experience the benefits of these open technologies – no strings attached.

Feel free to start a conversation below.

How to Use OSvC Restful APIs in Python: Quickly and Easily

Tue, 2019-03-12 15:45

Have you ever needed to act quickly and build an automation process to handle data in Oracle Service Cloud, such as restoring, updating, or even deleting bad records? If so, you know there are several approaches. So what do you do? Many people have found success writing a PHP script and hosting it on the Oracle Service Cloud Customer Portal (CP). But there are a few things you should know before going down that road, so that you don't overload your Customer Portal server, create a bad experience for your end customers, or generate extra sessions that count against your license compliance agreement. This post tells you what you need to know to take a different road: with just a few lines of Python you can create a process that lets you implement the integration with very little time investment.

First, make sure you have Python installed on your local machine. Take a look at the Python documentation online to get that first step done. If you want to experiment first, the Anaconda distribution is the easiest way to go.

Let's get started. Here is a simple Python script you can use to make a REST API request. Make sure you replace the variable values where it says [REPLACE ...].

import json

import requests
from requests.auth import HTTPBasicAuth


def main():
    try:
        site = '[REPLACE FOR YOUR SITE]'
        payload = {
            "id": [REPLACE FOR YOUR REPORT ID],
            "filters": [{
                "name": "[REPLACE FOR REPORT FIELD]",
                "operator": {"lookupName": "="},
                "values": "[VALUE]"
            }]
        }
        response = requests.post(
            site + '/analyticsReportResults',
            auth=HTTPBasicAuth('[REPLACE FOR YOUR USER]', '[REPLACE FOR PASSWORD]'),
            data=json.dumps(payload))
        json_data = json.loads(response.text)
        print(json_data['rows'])
    except Exception as e:
        print('Error: %s' % e)


main()


Now that you know how to quickly create a Python script that makes an API request, you are ready to handle data tasks such as restores, backups, updates, creation, deletion, and so on. You can also read data from one site and insert it into another; for example, if you have a restored site or a backup, you can use the same approach to query site A and insert the results into site B.
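
Here is a minimal sketch of that A-to-B pattern: pull rows from a report on site A, then re-create records on site B. The base URLs, credentials, report ID, target endpoint, and field mapping are all placeholders; adapt them to whatever object you are actually restoring.

import json

import requests
from requests.auth import HTTPBasicAuth

# All of the values below are placeholders; the target endpoint and field
# mapping depend entirely on the object you are restoring.
SITE_A = '[REPLACE FOR SITE A BASE URL]'
SITE_B = '[REPLACE FOR SITE B BASE URL]'
AUTH_A = HTTPBasicAuth('[USER A]', '[PASSWORD A]')
AUTH_B = HTTPBasicAuth('[USER B]', '[PASSWORD B]')

# 1. Pull rows from a report on site A (the same call as the script above).
payload = {"id": [REPLACE FOR YOUR REPORT ID]}
rows = requests.post(SITE_A + '/analyticsReportResults', auth=AUTH_A,
                     data=json.dumps(payload)).json().get('rows', [])

# 2. Re-create each record on site B, one request at a time.
for row in rows:
    record = {"subject": row[0]}  # hypothetical mapping from a report column
    requests.post(SITE_B + '/incidents', auth=AUTH_B, data=json.dumps(record))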

Make sure you don't create parallel threads that flood your OSvC server with requests.

Yep, that's it! I hope this helps!

Kubernetes and the "Platform Engineer"

Tue, 2019-03-05 22:56

In recent conversations with enterprise customers, it has become clear that a separation of concerns is emerging among teams delivering production applications on top of Kubernetes infrastructure: the application developers building containerized apps driven by business requirements, and the "platform engineers" owning and running the supporting Kubernetes infrastructure and platform components. For those familiar with DevOps or SRE (pick your term), this is arguably nothing new, but the consolidation of these teams around the Kubernetes API is leading to something altogether different. In short, the Kubernetes YAML file (via the Kubernetes API) is becoming the contract, or hand-off, between application developers and the platform team (or, more succinctly, between dev and ops).

In the beginning, there was PaaS

Well, actually, there was infrastructure! But for application developers there were an awful lot of pieces to assemble (compute, network, storage) to deliver an application. Technologies like virtualization and infrastructure as code (Terraform et al.) made it easier to automate the infrastructure part, but there were still a lot of moving parts. Early PaaS (Platform as a Service) pioneers, recognizing this complexity for developers, created platforms that abstracted away much of the infrastructure (and complexity), albeit for a very targeted (or "opinionated") set of application use cases or patterns, which is fine if your application fits that pattern; if not, you are back to dealing with infrastructure.

Then Came CaaS

Following the success of container technology, popularized in recent years by Docker, so-called "Containers as a Service" offerings emerged a few years back, sitting somewhere between IaaS and PaaS. CaaS services abstract some of the complexity of dealing with raw infrastructure, allowing teams to deploy and operate container-based applications without having to build, set up, and maintain their own container orchestration tooling and supporting infrastructure.

The emergence of CaaS also coincided largely with the rise of Kubernetes as the de facto standard in container orchestration. The majority of CaaS offerings today are managed Kubernetes offerings (not all offerings are created equal, though; see The Journey to Enterprise Managed Kubernetes for more details). As discussed previously, Kubernetes has essentially become the new Operating System for the Cloud, and arguably the modern application server, as Kubernetes continues to move up the stack. At a practical level, this means that in addition to the benefits of a CaaS described above, customers benefit from standardization and portability of their container applications across multiple cloud providers and on-prem (assuming those providers adhere to and are conformant with upstream Kubernetes).

Build your Own PaaS?

Despite CaaS and the standardization of Kubernetes for delivering these, there is still a lot of potential complexity for developers.  With “complexity”, “cultural changes” and “lack of training” recently cited as some of the most significant inhibitors to container and Kubernetes adoption, we can see there’s still work to do.  An interesting talk at KubeCon Seattle played on this with the title: “Kubernetes is Not for Developers and Other Things the Hype Never Told You”.

Enter the platform engineer. Kubernetes is broad and deep, and in many cases only a subset of it ultimately needs to be exposed to end developers. For an enterprise that wants to offer a modern container platform to its developers, there are a lot of common elements and tooling that every application team consuming the platform shouldn't have to reinvent. Examples include (but are not limited to) monitoring, logging, service mesh, secure communication/TLS, ingress controllers, network policies, and admission controllers. In addition to presenting common services to developers, the platform engineer can extend Kubernetes (via extension APIs) with things like the Service Catalog/Open Service Broker, to facilitate easier integration with other existing cloud services, or can provide Kubernetes Operators: helpers that developers can consume for creating (stateful) services in their clusters (see examples here and here).

The platform engineer, in essence, has an opportunity to carve out the right cross-section of Kubernetes (hence "build your own PaaS") for the business, both in terms of the services exposed to developers to promote reuse and in the enforcement of business policy (security and compliance).

Platform As Code

The fact that you can use the same Kubernetes API, CLI (kubectl), and deployment (YAML) files to drive the platform described above has led some to talk about the approach as "platform as code": essentially an evolution of infrastructure as code, except that here native Kubernetes interfaces drive the creation of a complete Kubernetes-based application platform for enterprise consumption.

The platform engineer and the developer now have a clear separation of concerns (with the appropriate Kubernetes RBAC roles and role bindings in place!). The platform engineer can check the complete definition of the platform described above into source control. Similarly, the developer consuming the platform checks their Kubernetes application definition into source control, and the Kubernetes YAML file/definition becomes the contract (and enforcement point) between the developer and the platform engineer.
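
As an illustration of that contract, here is a minimal sketch using the official Kubernetes Python client, in which the platform engineer grants a development team a namespaced Role and RoleBinding. The namespace, role rules, and group name are illustrative assumptions, and the dict bodies mirror the equivalent YAML manifests:

from kubernetes import client, config

config.load_kube_config()                  # platform engineer's admin kubeconfig
rbac = client.RbacAuthorizationV1Api()
namespace = "team-a"                       # hypothetical team namespace

# These dict bodies match the equivalent YAML manifests line for line.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "app-developer", "namespace": namespace},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "deployments", "services"],
        "verbs": ["get", "list", "watch", "create", "update", "patch", "delete"],
    }],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "app-developer-binding", "namespace": namespace},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io", "kind": "Role",
                "name": "app-developer"},
    "subjects": [{"apiGroup": "rbac.authorization.k8s.io", "kind": "Group",
                  "name": "team-a-developers"}],  # hypothetical developer group
}

rbac.create_namespaced_role(namespace, role)
rbac.create_namespaced_role_binding(namespace, binding)

The developers' own deployment YAML then has to work within the permissions granted here, which is exactly the enforcement point described above.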

Platform engineers ideally have a strong background in infrastructure software, networking and systems administration.  Essentially, they are working on the (Kubernetes) platform to deliver a product/service to (and in close collaboration with) end development teams.

In the future, we would expect additional work in the community around both sides of this contract: for developers, how they can discover which common services the platform offers; and for platform engineers, how they can provide (and enforce) a clear contract for their development team customers.

Four New Oracle Cloud Native Services in General Availability

Tue, 2019-03-05 17:45

This post was jointly written by Product Management and Product Marketing for Oracle Cloud Native Services. 

For those who participated in the Cloud Native Services Limited Availability Program, a sincere thank you! We have an important update: four more Cloud Native Services have just gone into General Availability.

Resource Manager for DevOps and Infrastructure as Code

Resource Manager is a fully managed service that uses open source HashiCorp Terraform to provision, update, and destroy Oracle Cloud Infrastructure resources at scale. Resource Manager integrates seamlessly with Oracle Cloud Infrastructure to improve team collaboration and enable DevOps. It can be useful for repetitive deployment tasks such as replicating similar architectures across Availability Domains or large numbers of hosts. You can learn more about Resource Manager in this blog post.

Streaming for Event-based Architectures

Streaming Service provides a "pipe" for flowing large volumes of data from producers to consumers. Streaming is a fully managed service with scalable and durable storage for ingesting large volumes of continuous data via a publish-subscribe (pub-sub) model. There are many use cases for Streaming: gathering data from mobile and IoT devices for real-time analytics, shipping logs from infrastructure and applications to an object store, and tracking current financial information to trigger stock transactions, to name a few. Streaming is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and provides Terraform integration. Additional information on Streaming is available in this blog post.
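
As a rough sketch of the pub-sub model, the snippet below publishes a small batch of messages with the OCI Python SDK. The stream OCID and messages endpoint are placeholders for values from your own tenancy, and message keys and values are sent as base64-encoded strings:

import base64

import oci

# Placeholders: the stream OCID and the stream's messages endpoint come from your tenancy.
config = oci.config.from_file()
stream_id = "ocid1.stream.oc1..example"
messages_endpoint = "https://streaming.us-phoenix-1.oci.oraclecloud.com"

stream_client = oci.streaming.StreamClient(config, service_endpoint=messages_endpoint)

def b64(value):
    # The Streaming API expects base64-encoded message keys and values.
    return base64.b64encode(value.encode()).decode()

details = oci.streaming.models.PutMessagesDetails(messages=[
    oci.streaming.models.PutMessagesDetailsEntry(key=b64("device-42"),
                                                 value=b64('{"temp": 21.5}'))])
result = stream_client.put_messages(stream_id, details)
for entry in result.data.entries:
    print(entry.partition, entry.offset)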

Monitoring and Notifications for DevOps

Monitoring provides a consistent, integrated method to obtain fine-grained telemetry and notifications for your entire stack. Monitoring allows you to track infrastructure utilization and respond to anomalies in real time. Besides the performance and health metrics available out of the box for infrastructure, you can publish custom metrics for visibility across the stack, set real-time alarms based on triggers, and receive notifications via email and PagerDuty. The Metrics Explorer provides a comprehensive view across your resources. You can learn more through these blog posts for Monitoring and Notifications. In addition, using the Data Source for Grafana, users can create Grafana dashboards for monitoring metrics.
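
As a sketch of publishing a custom metric with the OCI Python SDK, the snippet below posts a single datapoint. The metric namespace, compartment OCID, and region-specific telemetry-ingestion endpoint are placeholders:

from datetime import datetime, timezone

import oci

# Placeholders: compartment OCID, metric namespace, and the region-specific
# telemetry-ingestion endpoint below must be replaced with your own values.
config = oci.config.from_file()
monitoring = oci.monitoring.MonitoringClient(
    config,
    service_endpoint="https://telemetry-ingestion.us-phoenix-1.oraclecloud.com")

metric = oci.monitoring.models.MetricDataDetails(
    namespace="custom_app_metrics",
    compartment_id="ocid1.compartment.oc1..example",
    name="orders_processed",
    dimensions={"resourceId": "checkout-service"},
    datapoints=[oci.monitoring.models.Datapoint(
        timestamp=datetime.now(timezone.utc), value=42.0)])

monitoring.post_metric_data(
    oci.monitoring.models.PostMetricDataDetails(metric_data=[metric]))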

Next Steps

We would like to invite you to try these services and provide your feedback below. A free $300 trial is available at cloud.oracle.com/tryit. To evaluate other Cloud Native Services in Limited Availability, including Functions for serverless applications, please complete this sign-up form.

Why You Should Be Using Grafana With OCI

Sun, 2019-03-03 17:51

A few days ago we announced the availability of the Oracle Cloud Infrastructure datasource for Grafana. I've heard about Grafana quite a bit over the past few years and it was used to monitor our cloud environment in my last project before joining Oracle, but to be perfectly honest I'd never really played around with it myself.  This week I decided to change that, and I'm really glad that I did because I've already found practical uses for it that developers who host their application in Oracle's cloud can really benefit from.  I won't go into details on how to install Grafana or configure the datasource - the post linked above does a good job of that, so please refer to that to get started.  Instead, I wanted to share an immediate benefit that I came across when I created my first dashboard.

The very first graph that I created was a simple look at my Object Storage buckets.  I kept things simple and just added 3 metrics that I thought would be useful: Object Count, Stored Bytes and Uncommitted Parts.  Here's how that graph looks as of the time I wrote this article for one of my buckets:

Notice the blue line?  Yeah, so did I.  In fact, that was the very first thing that jumped out at me.  That blue line represents 15 MB of 'uncommitted parts': in other words, storage being used for in-progress, aborted, or otherwise uncommitted multipart uploads.  Now, 15 MB is nothing in the scope of a large enterprise application.  In my case it's just leftovers from when I was testing out multipart upload for another blog post.  But for some applications, this number could get large.  Really large.  A project I was on a few years ago allowed users to upload potentially very large (5-20 GB) video files and handled them via multipart/chunked uploads from pretty much anywhere in the world, which, as you can imagine, sometimes meant really poor internet connections.  The idea that we could have been paying for potentially terabytes worth of storage for unused files kind of makes me shudder, but with Grafana on OCI you'd be able to quickly and easily keep an eye on these sorts of things.  Obviously, it goes much further than this simple example, but I think it illustrates the point well enough.

To clean things up I decided to turn to the OCI CLI and grabbed a list of the outstanding multipart uploads like so:

oci os multipart list -bn doggos --all

To clean them up, unfortunately, you have to manually abort each upload.  If you've read many of my posts, you'll know that I am a big fan of Groovy for both web and scripting, so I came up with the following quick script to loop over each stranded upload and abort them:
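
For readers who prefer Python, a rough equivalent of that cleanup using the OCI Python SDK might look like the sketch below (the doggos bucket name comes from the CLI example above, and a default ~/.oci/config profile is assumed):

import oci

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
bucket = "doggos"

# List every outstanding multipart upload in the bucket, then abort each one.
uploads = oci.pagination.list_call_get_all_results(
    object_storage.list_multipart_uploads, namespace, bucket).data
for upload in uploads:
    print("Aborting {} ({})".format(upload.object, upload.upload_id))
    object_storage.abort_multipart_upload(namespace, bucket,
                                          upload.object, upload.upload_id)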

And cleaned up all of the abandoned multipart uploads.  How does your organization use Grafana?  Feel free to share in the comments below.
