OTN TechBlog


Latest Blog Posts from Oracle ACEs: April 28 - May 4, 2019

Thu, 2019-05-09 05:00

The chances of having a movie theater to yourself these days are slim. But while the rest of the world is focused on learning the fates of various Marvel characters in the latest Avengers epic, the members of the Oracle ACE program listed below demonstrated super will power last week by devoting their screen time to hammering out these blog posts.  The least you can do to reward that kind of effort is to take a look, right?


Oracle ACE Directors

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

Oracle ACEs

Oracle ACE Ahmed Aboulnaga
Principal, Attain
Washington D.C.


Oracle ACE Anju Garg
Corporate Trainer, Author, Speaker, Blogger
New Delhi, India


Oracle ACE Bert Scalzo
Technical Product Manager: Databases
Flower Mound, Texas


Oracle ACE Eduardo Legatti
Database Administrator (DBA), SYDLE
Belo Horizonte, Brazil


Oracle ACE Fabio Prado
Instructor, Oramaster Treinamentos em Bancos de Dados
Sao Paulo, Brazil


Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil


Oracle ACE Leonardo Gonzalez Cruz
SOA Architect, Services & Processes Solutions


Oracle ACE Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina


Oracle ACE Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France


Oracle ACE Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Malahide, Ireland


Oracle ACE Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida


Oracle ACE Wataru Morohashi
Solution Architect, Hewlett-Packard Japan, Ltd.
Tokyo, Japan

Oracle ACE Associates

Oracle ACE Associate Abigail Giles-Haigh
Chief Data Science Officer, Vertice
United Kingdom


Oracle ACE Associate Diana Robete
Team Lead/Senior Database Administrator, First4 Database Partners Inc
Calgary, Canada


Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia


Oracle ACE Associate Emiliano Fusaglia
Principal Oracle RAC DBA/Data Architect, Trivadis
Lausanne, Switzerland


Oracle ACE Associate Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin


Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland


Oracle ACE Associate Heema Satapathy
Senior Principal Consultant, BIAS Corporation
United States


Oracle ACE Associate Lykle Thijssen
Principal Architect, eProseed
Utrecht, Netherlands


Oracle ACE Associate Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia


Oracle ACE Associate Roy Salazar
Senior Oracle Database Consultant, Pythian
Costa Rica


Oracle ACE Associate Mark Daynes
Managing Director, Beyond Systems Ltd
Manchester, United Kingdom

Additional Resources

Spotlight on Oracle ACE Director Ruben Rodriguez

Wed, 2019-05-08 05:00

Oracle ACE Director Ruben Rodriguez is a Cloud and Mobile Solution Specialist with avanttic Consultoría Tecnológica in Madrid, Spain. He graduated from the Universidad Alfonso X El Sabio in Madrid in 2011 with a degree in Computer Science, then made his way through a variety of IT jobs in Spain and the UK before landing at avanttic in 2015. Ruben first entered the Oracle ACE program in October 2017 and was confirmed as an Oracle ACE Director in November 2018. Active in the community, Ruben is a blogger and frequent conference speaker. In December 2019 Packt Publishing will publish Professional Oracle Mobile, written by Ruben and co-author Soham Dasgupta.

Watch the video and get the story from Ruben himself.

Additional Resources

Latest Blog Posts from Oracle ACEs: April 21-27, 2019

Tue, 2019-05-07 05:00

Blogs in bloom...

Winter is mostly a memory, spring is in the air, and people naturally want to... sit inside and crank out a bunch of blog posts! These members of the Oracle ACE Program resisted the temptation to enjoy some fresh air and sunshine so they could share some of their expertise with you. Take it in.

ACE Directors

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland


Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland


Oracle ACE Director Kamran Agayev A.
Oracle DBA Expert, Azercell


Oracle ACE Director Richard Foote
Director/Principal Consultant, Richard Foote Consulting Pty Ltd
Canberra, Australia


Oracle ACE Director Timo Hahn
Principal Software Architect, virtual 7 GmbH

Oracle ACEs

Oracle ACE Bert Scalzo
Technical Product Manager: Databases
Flower Mound, Texas


Oracle ACE Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland


Oracle ACE Eduardo Legatti
Database Administrator (DBA), SYDLE
Belo Horizonte, Brazil


Oracle ACE Emrah Mete
Solution Architect/Data Engineer, Turkcell Technology
Istanbul, Turkey


Oracle ACE Fabio Prado
Instructor, Oramaster Treinamentos em Bancos de Dados
Sao Paulo, Brazil


Oracle ACE Kyle Goodfriend
Vice President, Planning & Analytics, Accelytics Inc.
Columbus, Ohio


Oracle ACE Martien van den Akker
Contractor: Fusion MiddleWare Implementation Specialist, Immigratie- en Naturalisatiedienst (IND)
The Hague, Netherlands


Oracle ACE Scott Wesley
Systems Consultant/Trainer, SAGE Computing Services
Perth, Australia


Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

ACE Associates

Oracle ACE Associate Adrian Png
Senior Consultant/Database Administrator, Insum


Oracle ACE Associate Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois


Oracle ACE Associate Dayalan Punniyamoorthy
Oracle EPM Consultant, Vertical Edge Consulting Group
Bengaluru, India


Oracle ACE Associate Diana Robete
Team Lead/Senior Database Administrator, First4 Database Partners Inc
Calgary, Canada


Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia


Oracle ACE Associate Lisandro Fernigrini
Senior Software Developer/DBA, Kapsch TrafficCom


Oracle ACE Associate Oliver Pyka
Senior Database Consultant


Oracle ACE Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

Additional Resources

Articles by Oracle ACEs - April 2019

Thu, 2019-05-02 05:00

Who you gonna ask?

While the phrase "wildly famous" may not apply to the Oracle ACE program members listed here, each has their own following, and each has earned a reputation for sharing experience and expertise. And let's face it, if you have a question about Oracle APEX, or about Autonomous Transaction Processing, are you going to ask one of the Kardashians? I don't think so.

Better you should ask one of these people, or read one of their freshly-written articles.

Oracle ACE Director Alex Nuijten
Director/Senior Oracle Developer, allAPEX
Oosterhout, Netherlands


Oracle ACE Director Alex Zaballa
Infrastructure Senior Principal, Accenture Brasil
São Paulo Area, Brazil


Oracle ACE Director Paul Guerin
Database Service Delivery Leader, Hewlett-Packard


Oracle ACE Umair Mansoob
Senior Database Architect, Sirius Computer Solutions
Skokie, Illinois


Oracle ACE Borys Neselovskyi
Solution Architect, OPITZ Consulting
Dortmund, Germany


Oracle ACE Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Dhahran, Saudi Arabia


Oracle ACE Mathias Magnusson
CEO, Evil Ape
Stockholm, Sweden

Related Resources

Oracle ACE Sessions at the Great Lakes Oracle Conference (GLOC)

Tue, 2019-04-30 05:00

On May 15-16, 2019 the Northeast Ohio Oracle Users Group will present the Great Lakes Oracle Conference in the historic Cleveland Public Hall, about a ten-minute walk from the Rock and Roll Hall of Fame and Museum.

The following members of the Oracle ACE Program will present sessions at GLOC. So if you're in the neighborhood, come on down.

For more information: Great Lakes Oracle Conference


Oracle ACE Director Gary Crisci
Principal Architect, General Electric
Norwalk, Connecticut


Oracle ACE Director Janice Griffin
Senior Sales Engineer, Quest Software
Longmont, Colorado


Oracle ACE Director Cary Millsap
Vice President, User Experience Services and Solutions, Cintra Software and Services
Dallas, Texas


Oracle ACE Director Scott Spendolini
Vice President, Viscosity North America
Austin, Texas


Mike Gangler
Senior Database Specialist / Database Architect, Secure-24
Southfield, Michigan


Oracle ACE Michael Messina
Senior Managing Consultant, Rolta-AdvizeX
Owensburg, Indiana


Oracle ACE Anuj Mohan
Technical Account Manager, Data Intensity, LLC
Covington, Kentucky


Oracle ACE Anton Nielsen
Vice President, Insum Solutions
Boston, Massachusetts


Oracle ACE Michel Schildmeijer
Lead Software Architect for Justis, SSC-I DJI
Gouda, Netherlands



Related Content

Two New “Dive Into Containers and Cloud Native” Podcasts

Thu, 2019-04-25 16:48

Oracle Cloud Native Services cover container orchestration and management, build pipelines, infrastructure as code, streaming and real-time telemetry. Join Kellsey Ruppel and me for two new podcasts about these services.

In the first podcast, you can learn more about three services for containers: Container Engine for Kubernetes, Cloud Infrastructure Registry, and Container Pipelines.

In the second podcast, you can learn more about Resource Manager for infrastructure as code, Streaming for event-based architectures, Monitoring for real-time telemetry, and Notifications for real-time alerts based on infrastructure changes.

You can find these and other podcasts at the Oracle Cloud Café. Please take a few minutes to listen in and share any feedback you may have.

Presentation Persuasion: Calls for Proposals for Upcoming Events

Thu, 2019-04-25 05:00

Sure you've got solid technical chops, and you share your knowledge through your blog, articles, and videos. But if you want to walk it like you talk it you have to get yourself in front of a live audience and keep them awake for about an hour. If you do it right, who knows? You might just spend the time after your session signing autographs and posing for selfies with your new fans. The first step in accomplishing all that is to respond to calls for proposals for conferences, meet-ups, and other live events like these:

  • AUSOUG Webinar Series 2019
    Ongoing series of webinars hosted by the Australian Oracle User Group. No CFP deadline posted.
  • NCOAUG Training Day 2019
    CFP Deadline: May 17, 2019
    North Central Oracle Applications User Group
    Location: Oakbrook Terrace, Illinois
    Event: August 1, 2019
  • MakeIT Conference
    CFP Deadline: May 17, 2019

    Organized by the Slovenian Oracle User Group (SIOUG).
    Event: October 14-15, 2019
  • HrOUG 2019
    CFP Deadline: May 27, 2019
    Organized by the Croatian Oracle Users Group
    Event: October 15-18, 2019
  • DOAG 2019 Conference and Exhibition
    CFP Deadline: June 3, 2019
    Organized by the Deutsche Oracle Anwendergruppe (German Oracle Users Group)
    Location: Nürnberg, Germany
    Event: November 19-20, 2019

Good luck!

Related Content

Deploying A Micronaut Microservice To The Cloud

Tue, 2019-04-23 10:17

So you've finally done it. You created a shiny new microservice. You've written tests that pass, ran it locally and everything works great. Now it's time to deploy and you're ready to jump to the cloud. That may seem intimidating, but honestly there's no need to worry. Deploying your Micronaut application to the Oracle Cloud is really quite easy and there are several options to choose from. In this post I'll show you a few of those options and by the time you're done reading it you'll be ready to get your app up and running.

If you haven't yet created an application, feel free to check out my last post and use that code to create a simple app that uses GORM to interact with an Oracle ATP instance. Once you've created your Micronaut application you'll need to produce a runnable JAR file; for this post I'll assume you followed that previous post, and any assets I refer to will reflect that assumption. With Micronaut, creating a runnable JAR is as easy as running ./gradlew assemble or ./mvnw package (depending on which build automation tool your project uses). Creating the artifact will take a bit longer than you're probably used to if you haven't used Micronaut before. That's because Micronaut precompiles all necessary metadata for dependency injection so that it can minimize runtime reflection. Once the task completes you will have a runnable JAR file in the build/libs directory of your project. You can launch your application locally by running java -jar /path/to/your.jar. So to launch the JAR created from the previous blog post, I set some environment variables and run:
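For example, something along these lines, where the wallet path, JAR name, and credential values are all placeholders you'd replace with your own (the `DATASOURCES_DEFAULT_*` variables map onto Micronaut's `datasources.default.*` configuration properties):

```shell
# Placeholder values - point TNS_ADMIN at your unzipped ATP wallet
export TNS_ADMIN=/path/to/wallet
export DATASOURCES_DEFAULT_URL="jdbc:oracle:thin:@mydb_high"
export DATASOURCES_DEFAULT_USERNAME="my_schema"
export DATASOURCES_DEFAULT_PASSWORD="StrongPassword1"

# Launch the JAR produced by the assemble/package task (name is a placeholder)
java -jar build/libs/my-app-0.1.jar
```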

Which results in the application running locally:

So far, pretty easy. But we want to do more than launch a JAR file locally. We want to run it in the cloud, so let's see what that takes. The first method I want to look at is more of a "traditional" approach: launching a simple compute instance and deploying the JAR file.

Creating A Virtual Network

If this is your first time creating a compute instance you'll need to set up virtual networking.  If you have a network ready to go, skip down to "Creating An Instance" below. 

Your instance needs to be associated with a virtual network in the Oracle Cloud. Virtual cloud networks (hereafter referred to as VCNs) can be pretty complicated, but as a developer you need to know enough about them to make sure that your app is secure and accessible from the internet. To get started creating a VCN, either click "Create a virtual cloud network" from the dashboard:

Or select "Networking" -> "Virtual Cloud Networks" from the sidebar menu and then click "Create Virtual Cloud Network" on the VCN overview page:

In the "Create Virtual Cloud Network" dialog, populate a name and choose the option "Create Virtual Cloud Network Plus Related Resources" and click "Create Virtual Cloud Network" at the bottom of the dialog:

The "related resources" here refers to the necessary Internet Gateways, Route Table, Subnets and related Security Lists for the network. The security list by default will allow SSH, but not much else, so we'll edit that once the VCN is created.  When everything is complete, you'll receive confirmation:

Close the dialog and back on the VCN overview page, click on the name of the new VCN to view details:

On the details page for the VCN, choose a subnet and click on the Security List to view it:

On the Security List details page, click on "Edit All Rules":

And add a new rule that will expose port 8080 (the port that our Micronaut application will run on) to the internet:

Make sure to save the rules and close out. This VCN is now ready to be associated with an instance running our Micronaut application.

Creating An Instance

To get started with an Oracle Cloud compute instance log in to the cloud dashboard and either select "Create a VM instance":

Or choose "Compute" -> "Instances" from the sidebar and click "Create Instance" on the Instance overview page:

In the "Create Instance" dialog you'll need to populate a few values and make some selections. It seems like a long form, but there aren't many changes necessary from the default values for our simple use case. The first part of the form requires us to name the instance, select an Availability Domain, OS and instance type:


The next section asks for the instance shape and boot volume configuration, both of which I leave as the default. At this point I select a public key that I can use later on to SSH in to the machine:

Finally, select a VCN that is internet accessible with port 8080 open:

Click "Create" and you'll be taken to the instance details page where you'll notice the instance in a "Provisioning" state.  Once the instance has been provisioned, take note of the public IP address:

Deploying Your Application To The New Instance

Using the instance public IP address, SSH in via the private key associated with the public key used to create the instance:
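For example (the key path and IP are placeholders; `opc` is the default user on Oracle Linux compute images):

```shell
# Substitute your own private key path and the instance's public IP
ssh -i ~/.ssh/oci_instance_key opc@<instance-public-ip>
```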

We're almost ready to deploy our application; we just need a few things. First, we need a JDK. I like to use SDKMAN for that, so I first install SDKMAN, then use it to install the JDK with sdk install java 8.0.212-zulu and confirm the installation:
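On the instance, that sequence looks roughly like this:

```shell
# Install SDKMAN, load it into the current shell, then install a JDK
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
sdk install java 8.0.212-zulu
java -version   # confirm the installation
```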

We'll also need to open port 8080 on the instance firewall so that our instance will allow the traffic:
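Oracle Linux images ship with firewalld, so opening the port is a sketch like this (run on the instance):

```shell
# Open port 8080 permanently, then reload the firewall rules
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload
```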

We can now upload our artifacts to the instance with SCP:
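Something like the following, with hypothetical filenames and a placeholder IP (the JAR, the unzipped wallet directory, and the two helper scripts described below):

```shell
scp -i ~/.ssh/oci_instance_key my-app-0.1.jar env.sh run.sh opc@<instance-public-ip>:~
scp -i ~/.ssh/oci_instance_key -r wallet/ opc@<instance-public-ip>:~
```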

I've copied the JAR file, my Oracle ATP wallet and 2 simple scripts to help me out. The first script sets some environment variables:
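A minimal sketch of that first script, with placeholder credential values you'd replace with your own:

```shell
#!/usr/bin/env bash
# env.sh - values are hypothetical placeholders for your own schema credentials;
# TNS_ADMIN points at the wallet directory after it's moved to /wallet
export TNS_ADMIN=/wallet
export DB_USER=my_schema
export DB_PASSWORD=StrongPassword1
```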

The second script is what we'll use to launch the application:
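A sketch of the launch script, assuming the property names follow Micronaut's `datasources.default.*` convention and the JAR name from earlier:

```shell
#!/usr/bin/env bash
# run.sh - hand the credentials from env.sh to the app as system properties
# (property names and JAR name are assumptions)
java -Ddatasources.default.username="$DB_USER" \
     -Ddatasources.default.password="$DB_PASSWORD" \
     -Doracle.net.tns_admin="$TNS_ADMIN" \
     -jar my-app-0.1.jar
```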

Next, move the wallet directory from the user home directory to the root with sudo mv wallet/ /wallet and source the environment variables with . ./env.sh. Now run the application with ./run.sh:

And hit the public IP in your browser to confirm the app is running and returning data as expected!

You've just deployed your Micronaut application to the Oracle Cloud! Of course, a manual VM install is just one method for deployment and isn't very maintainable long term for many applications, so in future posts we'll look at some other options for deploying that fit in the modern application development cycle.


Latest Blog Posts from Oracle ACEs: April 14-20, 2019

Tue, 2019-04-23 10:06

In writing the blog posts listed below, the endgame for the Oracle ACE program members is simple: sharing their experience and expertise with the community. That doesn't make them superheroes, but you have to marvel at their willingness to devote time and energy to helping others.

Here's what they used their powers to produce for the week of April 14-20, 2019.


Oracle ACE Director Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia


Oracle ACE Director Ludovico Caldara
Computing Engineer, CERN
Nyon, Switzerland


Oracle ACE Director Martin D'Souza
Director of Innovation, Insum Solutions
Alberta, Canada


Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, Texas


Oracle ACE Director Syed Jaffar Hussain
CTO, eProseed
Riyadh, Saudi Arabia


Oracle ACE Alfredo Krieg
Senior Principal Consultant, Viscosity North America
Dallas, Texas


Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany


Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise Japan
Tokyo, Japan



Oracle ACE Patrick Jolliffe
Manager, Li & Fung Limited
Hong Kong


Oracle ACE Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom


Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Riyadh, Saudi Arabia


Batmunkh Moltov
Chief Technology Officer, Global Data Engineering Co.
Ulaanbaatar, Mongolia


Oracle ACE Associate Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland



Related Resources

Automating DevSecOps for Java Apps with Oracle Developer Cloud

Mon, 2019-04-22 11:32

Looking to improve your application's security? Automating vulnerability reporting helps you prevent attacks that leverage known security problems in code that you use. In this blog we'll show you how to achieve this with Oracle's Developer Cloud.

Most developers rely on third party libraries when developing applications. This helps them reduce the overall development timelines by providing working code for specific needs. But are you sure that the libraries you are using are secure? Are you keeping up to date with the latest reports about security vulnerabilities that were found in those libraries? What about apps that you developed a while back and are still running but might be using older versions of libraries that don't contain the latest security fixes?

DevSecOps aims to integrate security aspects into the DevOps cycle, ideally automating security checks as part of the dev-to-release lifecycle. The latest release of Oracle Developer Cloud Service - Oracle's cloud-based DevOps and Agile team platform - includes a new capability to integrate security checks into your DevOps pipelines.

Relying on the public National Vulnerability Database, the new dependency vulnerability analyzer scans the libraries used in your application against the database of known issues, and flags any security risks your app might have based on this data. The current version of DevCS supports this for any Maven-based Java project, leveraging the pom files as the source of truth for the list of libraries used in your code.

Vulnerability Analyzer Step

When running the check, you can specify your level of tolerance to issues - for example, defining that you are OK with low-risk issues, but not with medium- or high-risk vulnerabilities. When a check finds issues you can fail the build pipeline, send notifications, and also log an issue in the issue tracking system provided for free with Developer Cloud.

Check out this demo video to see the process in action.

Having this type of vulnerability scan applied to your platform can save you from situations where hackers leverage publicly known issues in out-of-date libraries to break into your systems. These checks can be part of your regular build cycle, and can also be scheduled to run on a regular basis against systems that have already been deployed - to verify that they stay up to date with the latest security fixes.


Economics and Innovations of Serverless

Fri, 2019-04-19 13:08

The term serverless has been one of the biggest mindset changes since the term cloud, and learning how to “think serverless” should be part of every developer’s cloud-native journey. This is why one of Oracle’s 10 Predictions for Developers in 2019 is “The Economics of Serverless Drives Innovation on Multiple Fronts”. Let’s unpack what we mean by economics and innovation while covering a few common misconceptions.

The Economics

Cost is only part of the story

I often hear “cost reduction” as a key driver of serverless architectures. Everyone wants to save money and be a hero for their organization. Why pay for a full time server when you can pay per function millisecond? The ultimate panacea of utility computing — pay for exactly what you need and no more. This is only part of the story.

Economics is a broad term for the production, distribution, and consumption of things. Serverless is about producing software. And software is about using computers as leverage to produce non-linear value. Facebook (really MySpace) leveraged software to change the way the world connected. Uber leveraged software to transform the transportation industry. Netflix leveraged software to change the way the world consumed movies. Software is transforming every major company in every major industry, and for most, is now at the heart of how they deliver value to end users. So why the fuss about serverless?

Serverless is About Driving Non-Linear Value

Because serverless is ultimately about driving non-linear business value, which can fundamentally change the economics of your business. I’ve talked about this many times, but Ben nails it — “serverless is a ladder. You’re climbing to some nirvana where you get to deliver pure business value with no overhead.”

Pundits point out that “focus on business value” has been said many times over the years, and they’re right. But every software architecture cycle learns from past cycles and incorporates new ways to achieve this goal of greater focus, which is why serverless is such an important cycle to watch. It effectively incorporates the promise (and best) of cloud with the promise (and learnings) of SOA.

Ultimately the winning businesses reduce overhead while increasing value to their customers by empowering their developers. That’s why the economics are too compelling to ignore. Not because your CRON job server goes from $30 to $0.30/month (although a nice use case), but because creating a culture of innovation and focus on driving business value is a formula for success.

So we can’t ignore the economics. Let’s move to the innovations.

The Innovations

The tech industry is in constant motion. Apps, infrastructure, and the delivery process drive each other forward together in a ping-pong fashion. Here are a few of the key areas to watch that are contributing to forward movement in the innovation cycle, as illustrated in the “Digital Trialectic”:

Depth of Services

The web is fundamentally changing how we deliver services. We’re moving towards an “everything-as-a-service” world where important bits of functionality can be consumed by simply calling an API. Programming is changing, and this is driven largely by the depth of available services that solve problems which once consumed developers’ working hours.

Twilio now removes the need for SMS, voice, and now email (acquired Sendgrid) code and infrastructure. Google’s Cloud Vision API removes the need for complex object and facial detection code and infrastructure. AWS’s Ground Station removes the need for satellite communications code and infrastructure (finally?), and Oracle’s Autonomous Database replaces your existing Oracle Database code and infrastructure.

Pizzas, weather, maps, automobile data, cats – you have an endless list of things accessible across simple API calls.

Open Source

As always, serverless innovation is happening in the world of open source as well, many of which end up as part of the list of services above. The Fn Project is fully open source code my team is working on which will allow anyone to run their own serverless infrastructure on any cloud, starting with Functions-as-a-service and moving towards things like workflow as well. Come say hi in our Slack.

But you can get to serverless faster with the managed Fn service, Oracle Functions. And there are other great industry efforts as well including Knative by Google, OpenFaas by Alex Ellis, and OpenWhisk by IBM.

All of these projects focus mostly on the compute aspect of a serverless architecture. There are many projects that aim to make other areas easier such as storage, networking, security, etc, and all will eventually have their own managed service counterparts to complete the picture. The options are a bit bewildering, which is where standards can help.


Standards
With a paradox of choice emerging in serverless, standards aim to ease the pain in providing common interfaces across projects, vendors, and services. The most active forum driving these standards is the Serverless Working Group, a subgroup of the Cloud Native Computing Foundation. Like cats and dogs living together, representatives from almost every major vendor and many notable startups and end users have been discussing how to “harmonize” the quickly-moving serverless space. CloudEvents has been the first major output from the group, and it’s a great one to watch. Join the group during the weekly meetings, or face-to-face at any of the upcoming KubeCons.

Expect workflow, function signatures, and other important aspects of serverless to come next. My hope is that the group can move quickly enough to keep up with the quickly-moving space and have a material impact on the future of serverless architectures, further increasing the focus on business value for developers at companies of all sizes.

A Final Word

We’re all guilty of skipping to the end in long posts. So here’s the net net: serverless is the next cycle of software architecture, its roots and learnings coming from best-of SOA and cloud. Its aim is to change the way in which software is produced by allowing developers to focus on business value, which in turn drives non-linear business value. The industry is moving quickly with innovation happening through the proliferation of services, open source, and ultimately standards to help harmonize this all together.

Like anything, the best way to get started is to just start. Pick your favorite cloud, and start using functions. You can either install Fn manually or sign up for early access to Oracle Functions.

If you don’t have an Oracle Cloud account, take a free trial today.

Creating A Microservice With Micronaut, GORM And Oracle ATP

Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular. And for good reason, too. It's a pretty revolutionary framework for the JVM world that uses compile-time dependency injection and AOP that does not use any reflection. That means huge gains in startup time, runtime performance, and memory consumption. But it's not enough to just be performant; a framework has to be easy to use and well documented. The good news is, Micronaut is both of these. And it's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction that the industry is heading and have built the framework with that direction in mind. This means that things like Serverless and Cloud deployments are easy and there are features that provide direct support for them.

In this post we'll look at how to create a Microservice with Micronaut which will expose a "Person" API. The service will utilize GORM which is a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4J and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
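The exact DDL depends on your model; a minimal guess at a GORM-friendly PERSON table, run through SQL*Plus with placeholder credentials and TNS alias (the column names, and the `version` column GORM uses for optimistic locking, are assumptions):

```shell
# Connect using the credentials and a TNS alias from your wallet (placeholders)
sqlplus my_schema/StrongPassword1@mydb_high <<'EOF'
CREATE TABLE person (
  id         NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  first_name VARCHAR2(50) NOT NULL,
  last_name  VARCHAR2(50) NOT NULL,
  version    NUMBER DEFAULT 0 NOT NULL
);
EOF
```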

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding elements like the application itself, controllers, and more as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:
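Something along these lines, where the package and app name are placeholders (the `--lang groovy` flag tells the CLI to generate a Groovy project):

```shell
# App/package names are placeholders
mn create-app com.example.person-app --lang groovy
cd person-app
```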

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation. It should look like this once you're done:

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:

Modify the `dependencies` block in your `build.gradle` file so that the Oracle JDBC JARs and the `micronaut-hibernate-gorm` artifact are included as dependencies:

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with.  They're just POGOs (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called `Person.groovy` in a new directory called `model`.  Populate the model as such:

Take note of a few items here. We've annotated the class with `@Entity` (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has three properties: `firstName`, `lastName`, and `isCool`. If you look back at the DDL we used to create the `person` table above, you'll notice two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity, and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties, which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:

That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Let's modify the `PersonController` that we created earlier to give us some endpoints that we can use to perform persistence operations. First, we'll need to inject our `PersonService` into the controller.  This too is straightforward: simply include the following just inside the class declaration:

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this; within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`; if not, we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method, meaning that inside the method we'll have a fully populated `Person` bean to work with.

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) insists that a `Person`'s firstName must be between 5 and 50 characters, we receive a 422 Unprocessable Entity response containing an array of validation errors.

Now we'll add a `/list` endpoint that users can hit to list all of the Person objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.

Remember that our `PersonService` had two signatures for the `findAll` method - one that accepted no parameters and another that accepted a `Map`.  The Map signature can be used to pass additional parameters like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:

Or we can get a subset via the pagination params like so:

We can also add a `/person/get` endpoint to get a `Person` by ID:
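A sketch, assuming the ID arrives as a path variable:

```groovy
@Get("/get/{id}")
Person get(Long id) {
    personService.get(id)
}
```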

And a `/person/delete` endpoint to delete a `Person`:
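A sketch along the same lines (assumes `io.micronaut.http.annotation.Delete` is imported; the 204 response is an assumption):

```groovy
@Delete("/delete/{id}")
HttpResponse delete(Long id) {
    personService.delete(id)
    HttpResponse.noContent()
}
```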


We've seen here that Micronaut is a simple but powerful way to create performant Microservice applications and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend.  Your feedback is very important to me so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at the entire application, you can view it or clone it from GitHub.

Oracle ACEs at APEX Connect 2019, May 7-9 in Bonn

Thu, 2019-04-18 11:36

APEX Connect 2019, the annual conference organized by DOAG (the German Oracle Applications User Group), will be held May 7-9, 2019 in Bonn, Germany. The event features a wide selection of sessions and events covering APEX, PL/SQL, and JavaScript.  Among the session speakers are the following members of the Oracle ACE Program:

Oracle ACE Director Niels de Bruijn
Business Unit Manager APEX, MT AG
Cologne, Germany




Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands



Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy




Oracle ACE Director John Edward Scott
Founder, APEX Evangelists
West Yorkshire, United Kingdom



Oracle ACE Director Kamil Stawiarski
Owner/Partner, ORA-600
Warsaw, Poland



Oracle ACE Director Martin Widlake
Database Architect and Performance Specialist, ORA600
Essex, United Kingdom



Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands



Oracle ACE Tobias Arnhold
Freelance APEX Developer, Tobias Arnhold IT Consulting



Oracle ACE Dietmar Aust
Owner, OPAL UG
Cologne, Germany



Oracle ACE Kai Donato
Senior Consultant for Oracle APEX Development, MT AG
Cologne, Germany



Oracle ACE Daniel Hochleitner
Freelance Oracle APEX Developer and Consultant
Regensburg, Germany



Oracle ACE Oliver Lemm
Business Unit Manager, MT AG
Cologne, Germany



Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands



Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany



Oracle ACE Matt Mulvaney
Senior Development Consultant, Explorer UK LTD
Leeds, United Kingdom



Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda, Netherlands



Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zürich, Switzerland



Oracle ACE Sven-Uwe Weller
Syntegris Information Solutions GmbH



Oracle ACE Associate Carolin Hagemann
Hagemann IT Consulting
Hamburg, Germany



Oracle ACE Associate Moritz Klein
Senior APEX Consultant, MT AG
Frankfurt, Germany


Additional Resources

Developers Decide One Cloud Isn’t Enough

Wed, 2019-04-17 08:00


Developers have significantly greater choice today than even just a few years ago when considering where to build, test, and host their services and applications, which clouds to move existing on-premises workloads to, and which of the multitude of open source projects to leverage. So why, in this new era of empowered developers and expanding choice, have so many organizations pursued a single-cloud strategy? In recent years, the proliferation of new cloud native open source projects and of cloud service providers - who have added capacity, functionality, tools, resources, and services - has resulted in better performance, different cost models, and more choice for developers and DevOps engineers, while increasing competition among providers. This is leading to a new era of cloud choice, in which the norm will be dominated by multi-cloud and hybrid cloud models.

As new cloud native design and development technologies like Kubernetes, serverless computing, and the maturing discipline of microservices emerge, they help accelerate, simplify, and expand deployment and development options. Users can leverage new technologies with their existing designs and deployments, and the flexibility these technologies afford expands users' options to run on many different platforms. Given this rapidly changing cloud landscape, it is not surprising that hybrid cloud and multi-cloud strategies are being adopted by an increasing number of companies today. 

For a deeper dive into Prediction #7 of the 10 Predictions for Developers in 2019 offered by Siddhartha Agarwal, “Developers Decide One Cloud Isn’t Enough”, we look at the growing trend for companies and developers to choose more than one cloud provider. We’ll examine a few of the factors they consider, the needs determined by a company’s place in the development cycle, business objectives, and level of risk tolerance, and predict how certain choices will trend in 2019 and beyond.


Different Strokes

We are in a heterogeneous IT world today. A plethora of choice and use cases, coupled with widely varying technical and business needs and approaches to solving them, give rise to different solutions. No two are exactly the same, but development projects today typically fall within the following scenarios.

A. Born in the cloud development – these projects suffer little to no constraint imposed by existing applications; it is highly efficient and cost-effective to begin design in the cloud. They naturally leverage containers and new open source development tools like serverless (https://fnproject.io/) or service mesh platforms (e.g., Istio). A decade ago, startup costs based on datacenter needs alone were a serious barrier to entry for budding tech companies – cloud computing has completely changed this.

B. On premises development moving to cloud – enterprises in this category have many more factors to consider. Java teams for example are rapidly adopting frameworks like Helidon and GraalVM to help them move to a microservice architecture and migrate applications to the cloud. But will greenfield development projects start only in cloud? Do they migrate legacy workloads to cloud? How do they balance existing investments with new opportunities? And what about the interface between on-premises and cloud?

C. Remaining mostly on premises but moving some services to cloud – options are expanding for those in this category. A hybrid cloud approach has been expanding, and we predict it will continue to expand, over at least the next few years.  The cloud native stacks available on premises now mirror the cloud native stacks in the cloud, enabling a new generation of hybrid cloud use cases. An integrated and supported cloud native framework that spans on-premises and cloud options delivers choice once again. And security, privacy, and latency concerns will dictate some of these projects' unique needs.


If It Ain’t Broke, Don’t Fix It?

IT investments are real. Inertia can be hard to overcome. Let’s look at the main reasons for not distributing workloads across multiple clouds.  

  • Economy of scale tops the list, as most cloud providers will offer discounts for customers who go all in; larger workloads on one cloud provide negotiating leverage.
  • Development staff familiarity with one chosen platform makes it easier to bring on and train new developers, shortening ramp time to productivity.
  • Custom features or functionality unique to the main cloud provider may need to be removed or redesigned in moving to another platform. Even on supposedly open platforms, developers must be aware of the not-so-obvious features impacting portability.
  • Geographical location of datacenters for privacy and/or latency concerns in less well-served areas of the world may also inhibit choice, or force uncomfortable trade-offs.
  • Risk mitigation is another significant factor, as enterprises seek to balance conflicting business needs with associated risks. Lean development teams often need to choose between taking on new development work and modernizing legacy applications when resources are scarce.

Change is Gonna Do You Good

These are valid concerns, but as dev teams look more deeply into the robust services and offerings emerging today, the trend is to diversify.

The most frequently cited concern is vendor lock-in. This counter-argument to economy of scale says that the more difficult it is to move your workloads off of one provider, the less motivated that vendor is to help reduce your cost of operations. For SMBs (small to mid-sized businesses) without a ton of leverage compared to large enterprises, this can be significant. Ensuring portability of workloads is important. A comprehensive cloud native infrastructure is imperative here – one that includes container orchestration but also streaming, CI/CD, and observability and analysis (e.g., Prometheus and Grafana). Containers and Kubernetes deliver portability, provided your cloud vendor uses unmodified open source code. In this model, a developer can develop their web application on their laptop, push it into a CI/CD system on one cloud, and leverage another cloud for managed Kubernetes to run their container-based app. However, the minute you start using specific APIs from the underlying platform, moving to another platform becomes much more difficult. AWS Lambda is one of many examples.

Mergers, acquisitions, changing business plans or practices, or other unforeseen events may impact a business at a time when they are not equipped to deal with it. Having greater flexibility to move with changing circumstances, and not being rushed into decisions, is also important. Consider for example, the merger of an organization that uses an on-premises PaaS, such as OpenShift, merging with another organization that has leveraged the public cloud across IaaS, PaaS and SaaS. It’s important to choose interoperable technologies to anticipate these scenarios.

Availability is another reason cited by customers. A thoughtfully designed multi-cloud architecture not only offers potential negotiating power as mentioned above, but also allows for failover in case of outages, DDoS attacks, local catastrophes, and the like. Larger cloud providers with massive resources and proliferation of datacenters and multiple availability domains offer a clear advantage here, but it also behooves the consumer to distribute risk across not only datacenters, but over several providers.

Another important set of factors is related to cost and ROI. Running the same workload on multiple cloud providers to compare cost and performance can help achieve business goals, and also help inform design practices.  

Adopting open source technologies enables businesses to choose where to run their applications based on the criteria they deem most important, be they technical, cost, business, compliance, or regulatory concerns. Moving to open source thus opens up the possibility to run applications on any cloud. That is, any CNCF-certified Kubernetes managed cloud service can safely run Kubernetes – so enterprises can take advantage of this key benefit to drive a multi-cloud strategy.

The trend in 2019 is moving strongly in the direction of design practices that support all aspects of a business’s goals, with the best offers, pricing and practices from multiple providers. This direction makes enterprises more competitive – maximally productive, cost-effective, secure, available, and flexible regarding platform choice.


Design for Flexibility

Though having a multi-cloud strategy seems to be the growing trend, it does come with some inherent challenges. To address issues like interoperability among multiple providers and establishing depth of expertise with a single cloud provider, we’re seeing an increased use of different technologies that help to abstract away some of the infrastructure interoperability hiccups. This is particularly important to developers, who seek the best available technologies that fit their specific needs.

Serverless computing seeks to reduce the awareness of any notion of infrastructure. Consider it similar to water or electricity utilities – once you have attached your own minimal home infrastructure to the endpoint offered by the utility, you simply turn on the tap or light switch, and pay for what you consume. The service scales automatically – for all intents and purposes, you may consume as much output of the utility or service as desired, and the bill goes up and down accordingly. When you are not consuming the service, there is no (or almost no) overhead.  

Development teams are picking cloud vendors based on capabilities they need. This is especially true in SaaS. SaaS is a cloud-based software delivery model with payment based on usage, rather than license or support-based pricing. The SaaS provider develops, maintains and updates the software, along with the hardware, middleware, application software, and security. SaaS customers can more easily predict total cost of ownership with greater accuracy. The more modern, complete SaaS solutions also allow for greater ease of configuration and personalization, and offer embedded analytics, data portability, cloud security, support for emerging technologies, and connected, end-to-end business processes.

Serverless computing not only provides simplicity through abstraction of infrastructure, its design patterns also promote the use of third-party managed services whenever possible. This provides flexibility and allows you to choose the best solution for your problem from the growing suite of products and services available in the cloud, from software-defined networking and API gateways, to databases and managed streaming services. In this design paradigm, everything within an application that is not purely business logic can be efficiently outsourced.

More and more companies are finding it increasingly easy to connect elements together with Serverless functionality for the desired business logic and design goals. Serverless deployments talking to multiple endpoints can run almost anywhere; serverless becomes the “glue” that is used to make use of the best services available, from any provider.

Serverless deployments can be run anywhere, even on multiple cloud platforms. Hence flexibility of choice expands even further, making it arguably the best design option for those desiring portability and openness.



There are many pieces required to deliver a successful multi-cloud approach. Modern developers use specific criteria to validate whether a particular cloud is “open” and whether it supports a multi-cloud approach. Does it have the ability to:

  • extract/export data without incurring significant expense or overhead?
  • be deployed either on-premises or in the public cloud, including for custom applications, integrations between applications, etc.?
  • monitor and manage applications that might reside on-premises or in other clouds from a single console, with the ability to aggregate monitoring/management data?

And does it have a good set of APIs that enables access to everything in the UI via an API? Does it expose all the business logic and data required by the application? Does it have SSO capability across applications?

The CNCF (Cloud Native Computing Foundation) has over 400 cloud provider, user, and supporter members, and its working groups and cloud events specification engage these and thousands more in the ongoing mission to make cloud native computing ubiquitous, and allow engineers to make high-impact changes frequently and predictably with minimal toil.

We predict this trend will continue well beyond 2019 as CNCF drives adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects, and democratizing state-of-the-art patterns to make these innovations accessible for everyone.

Oracle is a platinum member of CNCF, along with 17 other major cloud providers. We are serious about our commitment to open source, open development practices, and sharing our expertise via technical tutorials, talks at meetups and conferences, and helping businesses succeed. Learn more and engage with us at cloudnative.oracle.com, and we’d love to hear if you agree with the predictions expressed in this post. 

Podcast: On the Highway to Helidon

Tue, 2019-04-16 23:00

Are you familiar with Project Helidon? It’s an open source Java microservices framework introduced by Oracle in September of 2018.  As Helidon project lead Dmitry Kornilov explains in his article Helidon Takes Flight, "It’s possible to build microservices using Java EE, but it’s better to have a framework designed from the ground up for building microservices."

Helidon consists of a lightweight set of libraries that require no application server and can be used in Java SE applications. While these libraries can be used separately, using them in combination provides developers with a solid foundation on which to build microservices.

In this program we’ll dig into Project Helidon with a panel that consists of two people who are actively engaged in the project, and two community leaders who have used Helidon in development projects, and have also organized Helidon-focused Meet-Ups.

This program was recorded on Friday, March 8, 2019. So let’s journey through time and space and get to the conversation. Just press play in the widget.

The Panelists

Dmitry Kornilov
Senior Software Development Manager, Oracle; Project Lead, Project Helidon
Prague, Czech Republic



Tomas Langer
Consulting Member of Technical Staff, Oracle; Member of the Project Helidon Team
Prague, Czech Republic


Oracle ACE Associate José Rodrigues

Principal Consultant and Business Analyst, Link Consulting; Co-Organizer, Oracle Developer Meetup Lisbon
Lisbon, Portugal


Oracle ACE Phil Wilkins

Senior Consultant, Capgemini; Co-Organizer, Oracle Developer Meetup London
Reading, UK


Relevant Resources

Latest Blog Posts from Oracle ACEs: April 7-13

Tue, 2019-04-16 14:00

Busy as bees, these ACEs have been, keeping the buzz going with another week's worth of posts offering the kind of technical experience and expertise that can help to keep you from getting stung on your next project.

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland




Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio Soluções em TI
Santa Catarina, Brazil



Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany



Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise
Tokyo, Japan



Oracle ACE Paul Guerin
Database Service Delivery Leader, Hewlett-Packard


Oracle ACE Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Leinster, Ireland



Oracle ACE Rodrigo de Souza
Solutions Architect, Innive Inc
Rio Grande do Sul, Brazil



Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio



Oracle ACE Stefan Koehler
Independent Oracle Performance Consultant and Researcher
Bavaria, Germany




Oracle ACE Yong Jing
System Architect Manager, Changde Municipal Human Resources and Social Security Bureau
Changde City, China



Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia



Oracle ACE Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin



Related Resources

Oracle ACE Guide

Oracle ACE Director: Top-tier community members who engage more closely with Oracle.
Oracle ACE: Established Oracle advocates who are well known in the community.
Oracle ACE Associate: Entry point for the Oracle ACE program.

Git Branch Protection in Oracle Developer Cloud

Tue, 2019-04-16 11:26

In the April release of Oracle Developer Cloud, we introduced a feature you can use to protect a specific branch of a Git repository hosted by Oracle Developer Cloud. This blog should help you understand the options we introduced.

Who has access to branch protection?

The only one allowed to configure branch protection for a Git repository is the user with the Project Owner role for the project in which the Git repository was created.

Where can we find the branch protection option?

To access this feature, select the Project Administration tab on the left navigation bar and then select the Branches tile in Developer Cloud.  This feature is accessible to a Project Owner, not to a Project Member.


Branch Protection Settings – Getting Started

To get started with setting branch protections, select the Git repository and the branch in the Branches tab. The dropdown lists all the repositories in the project and all the branches created for the selected repository.  In the following screenshot, I selected the NodeJSMicroService.git repository and the master branch.


Branch Protection – Options

There are four options for branch protection:

  • Open
  • Requires Review
  • Private
  • Frozen

By default, every branch of every Git repository is Open.


Branch Protection Options – Details


By default, any branch of a given Git repository has the Open branch protection type, meaning there are no restrictions on the branch. You can still apply two rules, without imposing code merge rules, by selecting one or both of the following checkboxes:

Do not allow forced pushes: Select this option to ensure that, if there are any merge conflicts, no code can be pushed to the branch using the force push provision in Git.

Do not allow renaming or deleting the branch: Select this option to ensure that nobody can rename or delete the branch. The branch cannot be deleted manually or as part of the merge request.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.


Requires Review

If the Project Owner selects this branch protection option, code must be reviewed and approved by the configured reviewers before any push or code merge can take place. This is very useful with the master branch, to prevent any direct push or code merge without prior review. You can configure reviewers who are part of the project and set the criteria for approval. The Criteria for approval dropdown lets you require approval from all of the configured reviewers, just one reviewer, or any two of them.

In addition to the review criteria, there are a few other checkboxes that can provide more comprehensive coverage as part of this protection option.

Requires Successful Build:  Select this checkbox to ensure that the last build of the review branch to be merged with the selected branch was successful.

Reapproval needed when the branch is updated: Select this checkbox to ensure that, if a change is pushed to a branch after some reviewers have approved the merge request, the merge will only happen after the reviewers reapprove the merge request.

Changes pushed to the target branch must match review content: Select this checkbox to ensure that the reviewed code and the merged code are one and the same.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.


Private

This branch protection option ensures that only the user(s) configured as branch owners can push code directly to the branch. All other users must create a merge request to get their code into this branch. This option makes sense when users have branched the code to work on a fix or enhancement and you want to restrict direct pushes to a defined set of people.

Note: A Project Owner may not be a branch owner.

You can also impose two additional rules by selecting one or both of the following checkboxes:

Do not allow forced pushes: Select this checkbox to ensure that, if there are any merge conflicts, no code can be pushed to the branch using the force push provision in Git.

Do not allow renaming or deleting the branch: Select this checkbox to ensure that nobody can rename the branch or delete it manually or as part of the merge request.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.



Frozen

This is probably the simplest, yet a crucial, branch protection option. As the name suggests, it freezes the branch and prevents any further changes. It comes in handy during a code freeze on the release or master branch. Once a branch has been marked as Frozen, only the Project Owner can undo it.

You can save the configuration by clicking the Save button or discard it by clicking the Discard button.

Branch protection can help streamline the Release Management for the project and help enforce best practices in your development process.

To learn more about this and other new features in Oracle Developer Cloud,  take a look at the What's New in Oracle Developer Cloud Service document and the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud slack channel or in the online forum.

Happy Coding!

DevSecOps and Other New Features in Developer Cloud - April Update

Sat, 2019-04-13 09:27

Over the weekend we rolled out an update to Oracle Developer Cloud - your DevOps and Agile platform in the Oracle Cloud. A key new capability we added is security checks for your Java apps as part of our DevOps pipelines. Read below to learn more about this and other features available for you.

DevSecOps - Automate Code Security Checks

Keeping up to date with the latest information about security risks in your code can be a challenge, but failing to do so can put your company and your customers at risk. We've all heard stories about hackers leveraging well-known security breaches that companies forgot to patch. As your application grows, this becomes even more challenging as you struggle to keep track of the set of public libraries you are using.

With the new version of Oracle Developer Cloud, we've added DevSecOps functionality that scans your code against a database of known security vulnerabilities and alerts you if your code uses libraries with known security issues. In this first release, we scan Java code for projects that use Maven as their build framework.

You can set up automated jobs to review your code, report the results, and stop the rollout of vulnerable apps. You can also automatically add a task to the issue-tracking system in DevCS to help the team keep an eye on fixing the issue.

build setup

The build job will give you a detailed report with links to more information about every vulnerability we identify in your code.

security report
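The DevCS scan is built into the build job, so no extra project configuration is required. For readers who want a comparable check outside DevCS, the third-party OWASP dependency-check Maven plugin offers similar functionality; a minimal sketch of its configuration in a project's pom.xml (the version number and CVSS threshold are illustrative) might look like:

```xml
<!-- Sketch only: binds the OWASP dependency-check scan to the build
     and fails it when a dependency has a CVE scoring at or above 7. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>5.3.2</version>
  <configuration>
    <failBuildOnCVSS>7</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Failing the build on high-severity findings mirrors what the DevCS pipeline does when it stops the rollout of a vulnerable app.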

YAML Based Build and Pipeline Definition

Following the "Infrastructure as Code" approach, Developer Cloud now allows you to define build jobs and pipelines using YAML files. These files can be stored in your Git repositories in DevCS and versioned like the rest of your code. DevCS scans your Git repositories (a specific directory) for these files and automatically creates or updates jobs and pipelines when they change in the master branch. This provides a complementary approach to defining your CI/CD, in parallel to the visual approach in the DevCS interface.
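To convey the flavor of declaring a job and a pipeline as versioned text, here is a purely hypothetical sketch. The actual file layout and schema are defined in the DevCS documentation; every key and value below is illustrative, not the real format:

```yaml
# Hypothetical sketch only -- consult the DevCS documentation for the
# actual YAML schema. All keys and values here are illustrative.
job:
  name: build-and-test
  steps:
    - maven:
        goals: clean install

pipeline:
  name: ci-pipeline
  jobs:
    - build-and-test
    - deploy-to-test
```

Because the definition lives in Git, changes to your CI/CD setup get the same review, history, and rollback story as the application code itself.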

Enhanced Git Branch Protection

You now have more options to restrict the changes that can be made to specific code branches. New branch types include branches that require review, and branches that are private or frozen. For example, a branch marked as "requires review" limits pushes by requiring code review by other team members, a successful build job, or other combinations of conditions.

Branch Restrictions

Deployment Build Step

Until this release, deployment was separate from our build pipelines. In this release we are introducing a new build-job step type that can deploy to Oracle Cloud services, making it easier to add deployment jobs to pipelines. In addition, we connected the deployment step with the environments feature in DevCS, so you define the connections to your cloud services only once and can then reuse them across jobs.

Deployment Step

To learn more about these and other new features, have a look at the "What's New" document and the links it provides to our product documentation. If you have further questions, reach us on the Developer Cloud slack channel or the online forum.

Mark Your Calendars: Big Data Startup Molecula Goes Cloud Native

Thu, 2019-04-11 19:07

In this webcast, Matt Jaffee from Molecula will share his experience getting their Pilosa environment set up with Oracle Cloud Infrastructure. In addition, the Oracle Cloud product team will provide details around four recently announced Cloud Native Services: Resource Manager for Terraform-based automation, Streaming, Monitoring and Notifications. Learn why you should use these services and how you can get started.

Sign up: Webcast registration link

Time: April 23, 2019 10:00 a.m. - 11:00 a.m. PT

Speaker Details:

Matt Jaffee is a Lead Engineer/Developer Evangelist/Sales Engineer/DevOps Engineer and Chief Devourer of Breakfast Tacos (a highly contested title) at Molecula. He came to software development by way of an undergraduate degree in Astrophysics and a Master's in Computer Science. He's worked on networking, desktop apps, GPUs, data pipelines, and web apps, but feels most at home optimizing Pilosa – the open source indexing powerhouse at the heart of Molecula.

Akshai Parthasarathy is originally from India and spent his teenage years in Trinidad and Tobago before coming to study in the US. He worked at large and small companies prior to his current role as a Product Marketer at Oracle Cloud.  He likes being at the intersection of technology and marketing. 

Mark de Visser moved from The Netherlands to the Bay Area in the nineties and has worked in technical, product management and marketing roles since then. One of the early enthusiasts for open source, he played a key role introducing Enterprise Linux at Red Hat, and has been involved with developer and data center technologies ever since. He is now a product manager at Oracle Cloud Infrastructure, focused on the portfolio of cloud native technologies.



Latest Blog Posts from Oracle ACEs: March 31 - April 6, 2019

Thu, 2019-04-11 13:07

From time to time we all could use a little help getting all the pieces to fit. This collection of the latest technical blog posts from members of the Oracle ACE program from around the world might be exactly what you need to solve a persistent problem or complete a project. Or, at the very least, somewhere in this list of 21 posts from 14 ACEs you just might pick up a tip or two that will help you to avoid a migraine-inducing situation.

Get it together.

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland
Oracle ACE Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland
Oracle ACE Eduardo Legatti
Database Administrator (DBA)
Minas Gerais, Brazil
Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France
Oracle ACE Marcel Hofstetter
CEO/Enterprise Consultant, JomaSoft
St. Gallen, Switzerland
Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany
Oracle ACE Noriyoshi Shinoda
Hewlett-Packard Japan, Ltd.
Tokyo, Japan
Oracle ACE Peter Scott
Principal and Owner, Sandwich Analytics
Pays de la Loire, France
Oracle ACE Scott Wesley
Systems Consultant/Trainer, SAGE Computing Services
Perth, Australia
Oracle ACE Zaheer Syed
Oracle Application Specialist, Tabadul
Saudi Arabia
Oracle ACE Talip Hakan Ozturk
Co-Founder, Veridata Information Technologies
Istanbul, Turkey
Oracle ACE Yong Jing
System Architect Manager, Changde Municipal Human Resources and Social Security Bureau
Changde City, China
Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia
Oracle ACE Associate Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada
Additional Resources