Tug's Blog
Tug Grall (http://www.blogger.com/profile/12028480831632266604)

2019-09-05 - Multi-Nodes Redis Cluster With Docker
Read this article on my new blog
As part of my on-boarding/training at Redis Labs I continue to play with the product, and today I decided to install a local three-node cluster of Redis Enterprise Server (RS) and show how easy it is to move from a single-node/single-shard database to a multi-node, highly available one.
Once your cluster is up & running, you will kill some containers to see …

2019-09-03 - Getting Started With Redis Streams & Java
Read this article on my new blog
As you may have seen, I joined Redis Labs a month ago; one of the first tasks as a new hire is to learn more about Redis. So I learned, and I am still learning.
This is when I discovered Redis Streams. I am a big fan of streaming-based applications, so it is natural that I start with a small blog post explaining how to use Redis Streams and …

2017-08-08 - Getting started with MapR-DB Table Replication
Read & comment this article on my new blog
Introduction
MapR-DB Table Replication allows data to be replicated to another table, either on the same cluster or on another cluster. This is different from the automatic intra-cluster replication that copies the data onto different physical nodes for high availability and to prevent data loss.
This tutorial focuses on the …

2017-01-20 - Getting Started With Kafka REST Proxy for MapR Streams
Read & comment this article on my new blog
Introduction
MapR Ecosystem Package 2.0 (MEP) comes with some new features related to MapR Streams:
Kafka REST Proxy for MapR Streams provides a RESTful interface to MapR Streams and Kafka clusters, to consume and produce messages and to perform administrative operations.
Kafka Connect for MapR Streams is a utility for streaming data …

2017-01-04 - Getting Started with MQTT and Java
Read & comment this article on my new blog
MQTT (MQ Telemetry Transport) is a lightweight publish/subscribe messaging protocol.
MQTT is used a lot in Internet of Things applications, since it has been designed to run in remote locations on systems with a small footprint.
MQTT 3.1 is an OASIS standard, and you can find all the information at http://mqtt.org/
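Part of what makes MQTT so lightweight is how small its packets are. As an illustration, here is a sketch that builds a minimal MQTT CONNECT packet by hand (MQTT 3.1.1 framing shown; 3.1 differs only in the protocol name, "MQIsdp", and level, 3 — a real application would use a client library such as Eclipse Paho):

```python
def encode_remaining_length(n: int) -> bytes:
    """MQTT variable-length encoding: 7 bits per byte, MSB = continuation."""
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def connect_packet(client_id: str, keep_alive: int = 60) -> bytes:
    """Minimal MQTT 3.1.1 CONNECT packet (clean session, no auth, no will)."""
    payload = len(client_id).to_bytes(2, "big") + client_id.encode()
    variable_header = (
        b"\x00\x04MQTT"             # protocol name
        + b"\x04"                   # protocol level 4 = MQTT 3.1.1
        + b"\x02"                   # connect flags: clean session
        + keep_alive.to_bytes(2, "big")
    )
    body = variable_header + payload
    return b"\x10" + encode_remaining_length(len(body)) + body

packet = connect_packet("demo-client")
print(len(packet))  # the whole handshake request fits in a few dozen bytes
```

The entire connection handshake is a handful of bytes, which is exactly why the protocol suits constrained devices.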
This article will …

2016-10-11 - Getting started with Apache Flink and Kafka
Read this article on my new blog
Introduction
Apache Flink is an open source platform for distributed stream and batch data processing. Flink is a streaming dataflow engine with several APIs for creating data-stream-oriented applications.
It is very common for Flink applications to use Apache Kafka for data input and output. This article will guide you through the steps to use Apache …

2016-09-26 - Streaming Analytics in a Digitally Industrialized World
Read this article on my new blog
Get an introduction to streaming analytics, which gives you real-time insight from captured events and big data. There are applications across industries, from finance to wine making, though there are two primary challenges to be addressed.
Did you know that a plane flying from Texas to London can generate 30 million data points per flight? As Jim …

2016-09-01 - Setting up Spark Dynamic Allocation on MapR
Read this article on my new blog
Apache Spark can use various cluster managers to execute applications (standalone, YARN, Apache Mesos). When you install Apache Spark on MapR, you can submit applications in standalone mode or on YARN.
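With YARN, Dynamic Allocation lets Spark grow and shrink the executor pool based on workload; it is typically enabled with a handful of properties in spark-defaults.conf. A minimal sketch (the property names are standard Spark configuration; the values are only illustrative):

```
spark.dynamicAllocation.enabled          true
spark.shuffle.service.enabled            true
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.maxExecutors     10
```

The external shuffle service is required so that executors can be removed without losing the shuffle data they produced.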
This article focuses on YARN and Dynamic Allocation, a feature that lets Spark add or remove executors dynamically based on the workload. You can …

2016-03-31 - Save MapR Streams messages into MapR DB JSON
Read this article on my new blog
In this article you will learn how to create a MapR Streams Consumer that saves all the messages into a MapR-DB JSON Table.
Install and Run the sample MapR Streams application
The steps to install and run the applications are the same as the ones described in the following article:
MapR Streams application
Once you have the default producer and …

2016-03-10 - Getting Started with MapR Streams
Read this article on my new blog
You can find a new tutorial that explains how to deploy an Apache Kafka application to MapR Streams; the tutorial is available here:
Getting Started with MapR Streams
MapR Streams is a new distributed messaging system for streaming event data at scale, and it’s integrated into the MapR converged platform.
MapR Streams uses the Apache Kafka API, so …

2016-02-10 - Getting Started With Sample Programs for Apache Kafka 0.9
Read this article on my new blog
Ted Dunning and I have worked on a tutorial that explains how to write your first Kafka application. In this tutorial you will learn how to:
Install and start Kafka
Create and Run a producer and a consumer
You can find the tutorial on the MapR blog:
Getting Started with Sample Programs for Apache Kafka 0.9
2015-12-10 - Using Apache Drill REST API to Build ASCII Dashboard With Node
Read this article on my new blog
Apache Drill has a hidden gem: an easy-to-use REST interface. This API can be used to query, profile, and configure the Drill engine.
In this blog post I will explain how to use the Drill REST API to create ASCII dashboards using Blessed Contrib.
The ASCII dashboard looks like this:
Prerequisites
Node.js
Apache Drill 1.2
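The REST call at the heart of the post is a simple JSON POST to Drill's query endpoint. Here is a minimal sketch in Python rather than Node (the `/query.json` endpoint and the `queryType`/`query` payload are Drill's standard query API; the localhost address is an assumption):

```python
import json
from urllib import request

def drill_query_request(sql: str, host: str = "http://localhost:8047"):
    """Build a POST request for Drill's REST query endpoint."""
    payload = json.dumps({"queryType": "SQL", "query": sql}).encode()
    return request.Request(
        host + "/query.json",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = drill_query_request("SELECT * FROM cp.`employee.json` LIMIT 5")
# response = request.urlopen(req)   # requires a running Drill instance
# print(json.load(response))
```

The same two-field payload is what the Node dashboard sends for each widget; only the SQL changes.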
For this post, you will use the SFO …

2015-08-18 - Convert CSV file to Apache Parquet... with Drill
Read this article on my new blog
A very common use case when working with Hadoop is to store and query simple files (CSV, TSV, ...), and then, to get better performance and more efficient storage, to convert these files into a more efficient format, for example Apache Parquet.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem. Apache Parquet has the following …

2015-07-21 - Apache Drill : How to Create a New Function?
Read this article on my new blog
Apache Drill allows users to explore any type of data using ANSI SQL. This is great, but Drill goes even further than that and allows you to create custom functions to extend the query engine. These custom functions have all the performance of any of the Drill primitive operations, but allowing that performance makes writing these functions a little trickier …

2015-02-04 - Introduction to MongoDB Security
View it on my new blog
Last week at the Paris MUG, I had a quick chat about security and MongoDB, and I decided to create this post to explain how to configure the out-of-the-box security available in MongoDB.
You can find all the information about MongoDB Security in the following documentation chapter:
http://docs.mongodb.org/manual/security/
In this post, I won't go into the details about …

2015-02-01 - Moving My Beers From Couchbase to MongoDB
See it on my new blog: here
A few days ago I posted a joke on Twitter:
Moving my Java from Couchbase to MongoDB pic.twitter.com/Wnn3pXfMGi
— Tugdual Grall (@tgrall) January 26, 2015
So I decided to move it from a simple picture to a real project. Let's look at the two phases of this so-called project:
Moving the data from Couchbase to MongoDB
Updating the application code to use …

2015-01-23 - Everybody Says "Hackathon"!
TL;DR:
MongoDB & Sage organized an internal Hackathon
We use the new X3 Platform based on MongoDB, Node.js and HTML to add cool features to the ERP
This shows that “any” enterprise can (should) do it to:
look differently at software development
build strong team spirit
have fun!
Introduction
Like many of you, I have participated in multiple Hackathons where developers, designers and …

2015-01-20 - Nantes MUG : Event #2

Last night the Nantes MUG (MongoDB Users Group) had its second event. More than 45 people signed up and joined us at the Epitech school (thanks for this!). We were lucky to have 2 talks from local community members:
How “MyScript Cloud” uses MongoDB by Mathieu Ruellan
Aggregation Framework by Sebastien Prunier
How “MyScript Cloud” uses MongoDB
First of all, if you do not know MyScript …

2015-01-12 - How to create a pub/sub application with MongoDB?

Introduction

In this article we will see how to create a pub/sub application (messaging, chat, notification) fully based on MongoDB (without any message broker like RabbitMQ, JMS, ...).
So, what needs to be done to achieve such a thing:
an application "publishes" a message; in our case, we simply save a document into MongoDB
another application, or thread, subscribes to these events and will receive …

2014-11-25 - Big Data... Is Hadoop the good way to start?
In the past 2 years, I have met many developers and architects working on "big data" projects. This sounds amazing, but quite often the truth is not that amazing.
TL;DR:
You believe that you have a big data project?
Do not start with the installation of a Hadoop cluster -- the "how"
Start to talk to business people to understand their problem -- the "why"
Understand the data you must …

2014-08-21 - Introduction to MongoDB Geospatial feature
This post is a quick and simple introduction to the Geospatial features of MongoDB 2.6, using a simple dataset and queries.
Storing Geospatial Information
As you know, you can store any type of data, but if you want to query it you need to use coordinates and create an index on them. MongoDB supports three types of indexes for geospatial queries:
2d Index: uses simple coordinates (longitude, …

2014-03-28 - db.person.find( { "role" : "DBA" } )

Wow! It has been a while since I posted something on my blog. I have been very busy, moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions.
Last week I helped my colleague Norberto deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that …

2013-10-01 - Pagination with Couchbase

If you have to deal with a large number of documents when doing queries against a Couchbase cluster, it is important to use pagination to get rows by page. You can find some information in the documentation in the chapter "Pagination", but I want to go into more details and provide sample code in this article.
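The idea behind view pagination (resuming from the last key and document id seen, rather than skipping rows) can be modeled in miniature in plain Python. This is a toy sketch, not Couchbase client code: `rows` stands in for view results already sorted by (key, doc id), and the cursor plays the role of startkey/startkey_docid:

```python
def fetch_page(rows, limit, cursor=None):
    """Return one page of view rows plus the cursor for the next page.

    rows:   list of {"key": ..., "id": ...} sorted by (key, id)
    cursor: last (key, id) of the previous page, or None for the first page
    """
    if cursor is not None:
        # resume strictly after the last row seen, like startkey/startkey_docid
        rows = [r for r in rows if (r["key"], r["id"]) > cursor]
    page = rows[:limit]
    next_cursor = (page[-1]["key"], page[-1]["id"]) if len(page) == limit else None
    return page, next_cursor

beers = [{"key": "beer-%02d" % i, "id": "doc-%02d" % i} for i in range(5)]
page1, cursor = fetch_page(beers, 2)
page2, cursor = fetch_page(beers, 2, cursor)
```

Unlike skip/limit, the cost of fetching a page this way does not grow with the page number, which is why it is the recommended pattern for large result sets.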
For this example I will start by creating a simple view based on the beer-sample dataset, the …

2013-07-18 - How to implement Document Versioning with Couchbase
Introduction
Developers often ask me how to "version" documents with Couchbase 2.0. The short answer is: the clients and server do not expose such a feature, but it is quite easy to implement.
In this article I will use a basic approach, and you will be able to extend it depending on your business requirements.
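To make the idea concrete, here is a toy model in plain Python of one possible scheme (a sketch under my own naming assumptions, not the post's actual Couchbase code): the current document lives under its key, and every save archives the previous revision under a hypothetical `key::v<n>` key:

```python
class VersionedStore:
    """Toy key/value store with document versioning (a dict stands in for the bucket)."""

    def __init__(self):
        self.bucket = {}

    def save(self, key, doc):
        previous = self.bucket.get(key)
        if previous is not None:
            # archive the previous revision under key::v<n> before overwriting
            self.bucket["%s::v%d" % (key, previous["version"])] = previous
            version = previous["version"] + 1
        else:
            version = 1
        self.bucket[key] = {"version": version, "doc": doc}

    def get(self, key, version=None):
        """Fetch the current document, or a specific archived revision."""
        k = key if version is None else "%s::v%d" % (key, version)
        return self.bucket.get(k)

store = VersionedStore()
store.save("beer-1", {"name": "IPA", "abv": 6.5})
store.save("beer-1", {"name": "IPA", "abv": 7.0})
```

Reads of the current document stay a single key lookup, and older revisions remain addressable by their version number.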
Design
The first thing to do is to select how to "store/organize" …

2013-07-11 - Deploy your Node/Couchbase application to the cloud with Clever Cloud
Introduction
Clever Cloud is the first PaaS to provide Couchbase as a service, allowing developers to run applications in a fully managed environment. This article shows how to deploy an existing application to Clever Cloud.
I am using a very simple Node application that I have documented in a previous article: “Easy application development with Couchbase, Angular and Node”.