
What’s a REST API?

To understand what a REST API is, we must first understand what a web service is in general.

Web Services

Web services are structures that let methods run on servers and exchange values with applications over the web. This way we can write methods once, use them from different applications or with different technologies, and run them independently of any platform.

Web services are used in two distinct ways:

  1. Tasks that are repetitive. Instead of running these tasks constantly on client applications, we can run them on web services and call them from clients when needed. For example: A program that displays stock market data.
  2. Data exchange between applications running on different platforms. For example: A news app that serves both Android and iOS.

The foundation of web service architecture is built on top of HTTP. Generally, a request arrives at a web service, the service processes the request, and returns a response. SOAP and REST are two different standards a web service can follow to achieve this. Let’s talk briefly about SOAP:

Simple Object Access Protocol (SOAP) is an XML-based protocol that lets applications exchange data over HTTP. Data is sent to the web service as XML; the service interprets the data and sends the result back as XML.

REpresentational State Transfer (REST), on the other hand, is a newer architectural style for data exchange over HTTP. It is simpler than its alternative SOAP, and it carries less overhead data, which helps keep payloads small. Services written to fit REST constraints are also frequently called RESTful services. One of the biggest reasons for their development is that RESTful services use the already existing HTTP methods for communication between clients and servers, rather than creating their own complicated architecture like SOAP or RPC (Remote Procedure Call). Requests are done completely through HTTP methods, without any implicit rules. For example, where a SOAP service might expose a method named getProductName called with a POST request, a similar REST service would expose it as a GET request to products/name/{id}. Additionally, REST services don’t always have to return XML data. They can return data in any format, e.g., JSON, XML, TXT, or HTML.

REST API is the name of the structure that allows the methods of a server application to be used by other applications over the web. This can be used for mobile applications, cloud-based services, legacy applications, application servers, data exchange, web applications, cloud resources, partner applications, and many more.


The most commonly used HTTP methods in REST applications are the GET, POST, PUT, and DELETE operations. Most REST APIs can be built with just these 4 fundamental methods.

  • GET: As the name implies, it’s mostly used to retrieve data.
  • POST: Used to send new data.
  • PUT: Used to update existing data.
  • DELETE: Again, as the name implies, used to delete data.

For a simple example application, imagine we have a client database application. We hold records of customers and run methods on these records. These methods allow us to create, update, and delete customers. If we were to build a REST API around this, we’d get the following methods:

URL | Method | Parameters
/customers/{customerId} | GET | {customerId}
/customers | POST | {customer}
/customers/{customerId} | PUT | {customerId}, {customer}
/customers/{customerId} | DELETE | {customerId}
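
To make the table concrete, here is a minimal sketch of calling the first row from Java with HttpURLConnection; the host, port, and customer id are made up for the example.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CustomerApiDemo {
    public static void main(String[] args) throws Exception {
        // GET /customers/{customerId} -- hypothetical host, port, and id
        URL url = new URL("http://localhost:8080/customers/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        System.out.println("Status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the JSON representation of customer 42
            }
        }
    }
}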

Resources and URIs in REST Services

In REST, everything is done with resources. Resources are defined by their URI and can point to a method or a variable. HTTP requests are made to a web service’s URIs to call its methods. URIs are structured as below:


[Image: structure of a REST URI]

There are two types of URIs in REST. One of these is a Collection URI, and the other is an Element URI. Collection URIs are used for data types like arrays and lists. For example: http://api.example.com/resources

  • GET: Used to list the URIs or other details of the given collection.
  • PUT: Used to replace a whole collection with another collection.
  • POST: Used to create a new collection.
  • DELETE: Used to delete the whole collection.

Element URIs, on the other hand, address individual items. For example: http://api.example.com/resources/item/17

  • GET: Used to get the object with the given address.
  • PUT: Used to change an existing object or add a new one if it doesn’t exist.
  • POST: Used to create a new object. Always creates a new object.
  • DELETE: Used to delete the object with the given address.

All of the following URIs are also examples of Element URIs:

/customers/customerId/25

/users/username/45

/products/product/FF45A3

/weatherForecast/{state}/{city}?date={date}

What is CORS?

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell a browser to let a web application running at one origin (domain) have permission to access selected resources from a server at a different origin. A web application makes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, and port) than its own origin.

How does CORS work?

The CORS standard manages cross-origin requests by adding new HTTP headers to the standard list of headers. The Access-Control-Allow-Origin header allows servers to specify how their resources are shared with external domains. When a GET request is made to access a resource on a server, that server will respond with a value for the Access-Control-Allow-Origin header. Many times, this value will be *, meaning that the server will share the requested resources with any domain on the Internet.

What is a Preflight Request?

“Preflighted” requests first send an HTTP request with the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications for user data.

Let’s examine the example in the below image where CORS headers are used for data transfer between a client and server.

[Image: CORS preflight exchange between client and server]

The headers in the request for our example are:

  • Origin: The origin of the cross-site access request or preflight request.
  • Access-Control-Request-Method: Lets the server know what HTTP method will be used when the actual request is made.
    • In our example it’s the POST method.
  • Access-Control-Request-Headers: Lets the server know what HTTP headers will be used when the actual request is made.
    • In our example they’re X-PINGOTHER and Content-Type.

The headers that are sent in response to the preflight request are:

  • Access-Control-Allow-Origin: Has the value that was sent in the request’s Origin header. The value of this header can also be *, which would mean the resource can be accessed by any domain.
    • In our example it’s the same as the request’s origin: Server-b.com.
  • Access-Control-Allow-Methods: Specifies the method or methods allowed when accessing the resource.
    • In our example those methods are POST, GET, and OPTIONS.
  • Access-Control-Allow-Headers: Lists HTTP headers that can be used when making the actual request.
    • In our example those headers are X-PINGOTHER and Content-Type (the same as the client’s Access-Control-Request-Headers header).
  • Access-Control-Max-Age: The value in seconds for how long the response to the preflight request can be cached for without sending another preflight request.
    • In our example this value is 86400, which is equivalent to 24 hours.
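
Putting these pieces together, below is a minimal sketch of how a server might emit the response headers from this example, written as a Java servlet filter. The javax.servlet API and the class name are assumptions for illustration; this code is not part of the original example.

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Echo the caller's origin back; "*" would allow any domain instead
        response.setHeader("Access-Control-Allow-Origin", request.getHeader("Origin"));

        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            // Preflight request: advertise what the actual request may use
            response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS");
            response.setHeader("Access-Control-Allow-Headers", "X-PINGOTHER, Content-Type");
            response.setHeader("Access-Control-Max-Age", "86400"); // cache for 24 hours
            response.setStatus(HttpServletResponse.SC_OK);
            return; // the preflight response needs no body
        }

        chain.doFilter(req, res); // let the actual request continue
    }

    @Override
    public void destroy() {
    }
}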

Data Filtering with JSONPath

What is JSONPath?

JSONPath is a tool based on XPath that lets us do with JSON what XPath lets us do with XML. You might be asking: “What does XPath do for XML that we’d want to do the same for JSON?” XPath (and by extension, JSONPath) is a small utility that gives us an easy to use and understand syntax for picking certain parts out of a given XML (or JSON) document. We can liken it to RegEx, although their purposes and results are very different: both take data and a ruleset and give us the data we’re looking for. We can think of the ruleset we’re giving as a very simple scripting language (but not a DSL, don’t be scared). It is more like Linux’s grep utility than a full-blown scripting language like Python. JSONPath does this job not for XML, but for its much more popular little cousin, JSON, which is overtaking XML on most of the web.

Its creator, Stefan Goessner, describes the problems it can solve as:

  • Data may be interactively found and extracted out of JSON structures on the client without special scripting.
  • JSON data requested by the client can be reduced to the relevant parts on the server, thus minimizing the bandwidth usage of the server response.

JSONPath Expressions

There are 2 notations we can use while creating JSONPath expressions. One is the dot notation, which is closer to the C family of languages: $.store.book[0].title, and the other is the bracket notation, which treats JSON data like arrays: $['store']['book'][0]['title'] (notice the single quotes; they’re very important, because some old and widespread implementations of JSONPath don’t work with double quotes!). The dollar sign at the beginning of both expressions signifies the root of the JSON data.

Knowing the difference between these two notations is the simple part, and as we’re about to see, they can be used in a mixed fashion. The real advantage JSONPath gives us is the logical operations we can do over the data. We can list these operations as follows.

  • Being able to use wildcards (*) for member names and array indexes.
  • The recursive descent operator (..), which matches a member name at any depth of the document.
  • Using Python’s array slicing ([start:end:step]) to make selections on arrays.
  • Being able to access the data of the current object with the @ operator.
  • Being able to do logical filtering using special expressions in parentheses.

The last two points above are JSONPath’s biggest strengths, so I’ll talk about them a bit more. We can filter objects using their own values with expressions in parentheses:

$.store.book[(@.length - 1)].title

That JSONPath expression would give us the last element of the book array. Notice the parentheses and the @ operator that represents the array.

Again, while using parentheses and the @ operator, we can do logical filtering by putting a question mark before the parentheses.

$.store.book[?(@.price < 10)].title

That expression would return the titles of the books whose price is under 10. If we wanted to do such filtering with the JSON libraries of a language like Java or Python, we’d have to write some really long code for it. And for a reader familiar with JSONPath, understanding a JSONPath expression is much easier than following a bunch of for loops and if blocks.
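
For comparison, the popular Java implementation mentioned below (Jayway’s json-path) turns the same filter into a couple of lines. This is a minimal sketch; the JSON literal is abbreviated from Goessner’s bookstore example.

import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class JsonPathDemo {
    public static void main(String[] args) {
        String json = "{\"store\":{\"book\":["
                + "{\"title\":\"Sayings of the Century\",\"price\":8.95},"
                + "{\"title\":\"Sword of Honour\",\"price\":12.99},"
                + "{\"title\":\"Moby Dick\",\"price\":8.99}]}}";

        // Titles of all books priced under 10
        List<String> cheap = JsonPath.read(json, "$.store.book[?(@.price < 10)].title");
        System.out.println(cheap); // ["Sayings of the Century","Moby Dick"]

        // Python-style slice: titles of the first two books
        List<String> firstTwo = JsonPath.read(json, "$.store.book[0:2].title");
        System.out.println(firstTwo);
    }
}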

Useful Resources

JSONPath has even more operators and functions than I have mentioned here, with countless examples for all. If you visit this page, which is the original source of JSONPath, you can find a list of all operators, plus plenty of examples. Additionally, there is a very popular Java implementation on GitHub.

Using Apinizer’s allowed hours policy, you can limit the hours during which a client can access your API. This is useful if you want your API to be accessible only during certain hours of the day (e.g., when banks are open). To follow this tutorial, you should already have Apinizer installed and an API gateway defined.

Getting Started

For starters, go to your API’s gateway screen and click the “Add policy” button.

[Image: Add Policy button on the gateway screen]

This will bring up a list of available policies. You’ll want to select “Allowed Hours” from that list. Once you do, another dialog will open to ask you if you want to create this policy as a local or global policy. Local policies are kept with that gateway and can only be applied to that gateway. Global policies on the other hand are kept separately from a gateway, and can be used across many gateways. For our example, we’re going to create a local policy.

Defining the Policy

Once you select the “Local Policy” option, Apinizer will need you to determine the intervals where access is allowed to your API. You can add as many intervals as you want through the big plus (+) button.

[Image: Allowed hours for when most banks are open]

Optionally, you can configure the error message that will be returned when a client is rejected. To do this, simply expand the section labeled “Error Message Customization” and enter your own error body in JSON format. You can also set the HTTP return code if you want to return a specific code.

After you add all the time intervals you need, you can save the policy and redeploy the gateway. You should now have a working allowed hours policy for your API.

Documentation

Apinizer’s documentation also features a page detailing the Allowed Hours policy. You can find additional details about the policy at that page:

https://apinizer.com/docs/en/gw-management/policies/allowed-hours/

Using Apinizer’s max message size policy, you are able to limit the maximum size of message bodies clients can send in their requests to your API. This can be used to reduce traffic, control data, or to protect your API against spam. To follow this tutorial, you should already have Apinizer installed and an API gateway defined.

Getting Started

For starters, go to your API’s gateway screen and click the “Add policy” button.

[Image: Add Policy button on the gateway screen]

This will bring up a list of available policies. You’ll want to select “Max Message Size” from that list. Once you do, another dialog will open to ask you if you want to create this policy as a local or global policy. Local policies are kept with that gateway and can only be applied to that gateway. Global policies on the other hand are kept separately from a gateway, and can be used across many gateways. For our example, we’re going to create a local policy.

Defining the Policy

Once you select the “Local Policy” option, Apinizer will ask you to determine the maximum message size. Any message whose body size exceeds the maximum will be rejected (the body size must be smaller than or equal to the max size).

Optionally, you can configure the error message that will be returned when a client is rejected. To do this, simply expand the section labeled “Error Message Customization” and enter your own error body in JSON format. You can also set the HTTP return code if you want to return a specific code.

After the max size is set, you can save the policy and redeploy the gateway. You should now have a working max message size policy for your API.

Documentation

Apinizer’s documentation also features a page detailing the Max Message Size policy. You can find additional details about the policy at that page:

https://apinizer.com/docs/en/gw-management/policies/max-message-size/

API Logs in Apinizer

Logging is an essential part of any software. Good logs can be a developer’s biggest weapon when tracing problems in their code. This is especially true on the web, where issues are usually noticed after they occur and are hard to recreate without knowing the parameters that caused them, which, in a public API, come from somewhere outside the developer’s control. Not every log is equal, though. It’s important to record enough information, as sometimes a stack trace is just not enough.

Because of this, an Apinizer gateway is configured to log every interaction by default. The state of each and every request is logged as it’s being served. This way you don’t have to spend time on implementing logging inside your own application. You can use Apinizer’s logs to debug issues, see traffic, and analyze anything from traffic statistics to performance metrics.

If you open the Apinizer Manager and go to API Analytics > API Logs, you can access a basic table of request logs. The API Analytics category has many tools to monitor your gateways and APIs, but we’re going to focus on API Logs as it’s the most straightforward tool. API Logs lists each request that was processed by your gateways. With it you can see everything there is to know about a request:

  • Status: The HTTP code of the response (e.g., 404, 200)
  • Created Time: When the request arrived
  • Gateway: Which gateway it was processed by
  • Request Address: The IP address of the client
  • Sent Address: The IP/URL Apinizer redirected the request to
  • HTTP Method: What HTTP method was used (e.g., GET, POST)
  • Request Size: Size of the request body in bytes
  • Response Size: Size of the response body in bytes
  • Total Time: How long the total interaction took

[Image: API Logs table]

In addition to these, you can view the whole message (headers, parameters, and body) by clicking the magnifier button at the right side of each request. This includes all 4 messages, so not only can you see the exact messages that were received from and sent to the client, but also the messages between Apinizer and the target service.

[Image: Request details view]

Configuring API Logs

There may be times when you don’t need logging of entire requests on a gateway. You might have a platform that you believe is stable enough, or you might not need some of Apinizer’s statistics. This is very easy to configure on a per-gateway basis. Just head to the configuration page of the gateway you want to edit, and look for the green pencil icons to configure logging settings.

[Image: Gateway configuration screen]

From the small dialogs that will open, you can edit which parts of a message should be logged (or whether it should be logged at all). You can set these separately for each of the 4 legs a message travels:

  • Client to Gateway
  • Gateway to Target Application
  • Target Application to Gateway
  • Gateway to Client

[Image: Logging configuration dialog]

For example, if you have a minimal Apinizer installation that doesn’t do any message transformation, you can turn off a pair of these to reduce duplicates in your logs. As with every gateway operation, you’ll have to redeploy the gateway for your new logging rules to take effect.

Documentation

Apinizer’s documentation also has many resources about its logging capabilities.

Apinizer DB-2-API Procedure

Note: Apinizer doesn’t support OracleTypes.CURSOR. Only simple procedure calls can be served as services.

A. Procedures that Only Use IN Parameters

The Oracle Database procedure that’s used as demo:

-- "add" is a reserved word in Oracle, so the demo procedure and its parameters
-- are renamed here; the original also ended the INSERT with ":" instead of ";"
create or replace PROCEDURE add_record
(p_name IN VARCHAR, p_surname IN VARCHAR) AS
BEGIN
    INSERT INTO example(NAME, SURNAME) VALUES(p_name, p_surname);
    commit;
END add_record;


  1. Inside the API Catalog, a new API is defined with the DB-to-API Designer.

  2. If no database connection has been added to the pool before, one can be added by clicking the highlighted button.

  3. After filling the dialog with the information necessary for establishing a database connection, the validity of that information can be tested with the blue “Test Connection” button.

Name | Description
Name | Name of the connection pool definition. It must be unique.
Description | Describes the connection pool definition. It may help during management.
IP/Host | IP or host name of the database.
DBMS | Select your DBMS.
Port | Port number.
Database | Name of the database.
Username | The username Apinizer will use for the connection.
Password | The password of the user specified above.

  4. We can now begin adding methods to our new API through the “Create New Method” button.

By pressing “Parse SQL”, the types of the expected parameter values can be defined.

Finally, to test the correctness of our query, we use the “Test SQL statement” button.

By clicking save, we will have added the first method of the API.

At this stage you can either keep adding methods or leave the API with a single method.

B. Procedures that Use both IN and OUT Parameters

The Oracle Database procedure that’s used as demo:

-- The parameter is renamed to p_name so it no longer collides with the NAME
-- column, and the invalid SELECT ... INTO inside the cursor FOR loop is removed
create or replace PROCEDURE testPro(p_name IN VARCHAR2, outParam1 OUT VARCHAR2)
IS
v_result varchar2(2000);
BEGIN
    for c in (SELECT SURNAME FROM EXAMPLE WHERE NAME = p_name)
    loop
        v_result := v_result||' '||c.SURNAME;
    end loop;
    outParam1 := v_result;
END;


For this example, we’ll add another method to the API we created above.

Testing the SQL statement:

This way we have created an API with 2 methods.

After clicking save, a new “Create API” button will appear. By clicking it we will add this API to Apinizer’s API List as a REST API.

New buttons will appear after clicking Create API. API Detail takes you to the API List. At this stage the defining API part is complete.

Creating a Gateway

A gateway is created in the API List interface.

Name | Description
Name | Gateway name.
Description | Description for the gateway.
Type | Type of the gateway. Only REST is supported.
Address | URL the gateway will be accessed from.

The gateway can be saved after defining its address. At this stage, policies can be added from the configuration screen, or the gateway can simply be deployed using the button under the “NOT DEPLOYED” label.

Now we can access our API safely from the address defined for the gateway.

JOLT Transformation

Sample Jolt Transformation 

Jolt is a transformation library that manipulates a JSON document to produce a new JSON document. The desired transformation is described with a Jolt Spec, which is itself JSON. Processing the input together with the Spec yields the desired data as JSON. Let’s implement a Jolt transformation on a sample.

Let’s have a nested JSON document that we want to transform:

{
  "clientsActive": true,
  "clients": {
    "Acme": {
      "clientId": "Acme",
      "index": 1
    },
    "Axe": {
      "clientId": "AXE",
      "index": 0
    }
  },
  "data": {
    "bookId": null,
    "bookName": "Enchiridion"
  }
}



To apply a transformation to this JSON document, we need a Jolt Spec file. We process the Jolt Spec file together with the JSON we want to transform and get the result.

[
  {
    "operation": "shift",
    "spec": {
      "clients": {
        "Acme": {
          "$": "TransformResult[#2].Name",
          "@": "TransformResult[#2].Value"
        }
      }
    }
  }
]



After this transformation, we get the following result:

{
  "TransformResult": [
    {
      "Name": "Acme",
      "Value": {
        "clientId": "Acme",
        "index": 1
      }
    }
  ]
}


If we only want to get a certain field, we can use the following Jolt Spec file.

[
  {
    "operation": "shift",
    "spec": {
      "clients": {
        "Acme": {
          "index": "clients.Acme.IndexNo"
        }
      }
    }
  }
]


 The resulting JSON file is as follows:

{
  "clients": {
    "Acme": {
      "IndexNo": 1
    }
  }
}


Jolt transformations can be taken much further than these samples; Spec files can be combined and varied to implement transformations over much wider structures. See you in the next article!

curl -XGET "http://sunucu_ip:sunucu_port/asagidaki_uc_noktalarini_buraya_yaziniz"


_cat APIs


The query parameters used with this API:

  • v – Shows the column headers in the response.
  • h=columnName,ip,port – Returns only the columns for the requested fields.
  • bytes=kb – Converts the byte values in the response to the desired unit.
  • time=s – Converts the time values in the response to the desired unit.
  • format=json – The response is plain text by default. It can also be returned in json, smile, yaml, or cbor format.
  • s=columnName:desc – Sorts the text or numeric values of the given column in ascending (asc) or descending (desc) order.


The endpoints used with this API:

  • _cat/indices – Lists all indices.
  • _cat/shards – Shows which shard is held on which node, which index it belongs to, and so on.
  • _cat/nodes – Shows the cluster topology.
  • _cat/health – Shows the overall health of the cluster.
  • _cat/thread_pool – Shows the thread pool statistics of the nodes.
  • _cat/segments – Shows general segment information about the indices in the shards.
  • _cat/repositories – Lists the snapshot repositories, if any have been created.
  • _cat/snapshots – Lists the snapshots, if any.
  • _cat/pending_tasks – Shows the tasks waiting in the queue.
  • _cat/master – Shows general information about the elected master node.
  • _cat/count – Returns the total document count of all indices in the cluster, or of the selected indices.
    • _cat/count/{index_name}
  • _cat/allocation – Shows the shard counts and disk usage of the data nodes.


Examples:

 curl -XGET "http://10.10.10.10:9200/_cat/indices?format=json&bytes=kb&pretty"
 curl -XGET "http://10.10.10.10:9200/_cat/shards?v&s=index:desc"
 curl -XGET "http://192.168.56.1:9200/_cat/nodes?h=ip,port,heapPercent,name"


_cluster APIs


The path parameters used with this API:

  • _master – Queries only the elected master node.
  • 10.0.0.3,10.0.0.4 – Queries nodes by their IP addresses.
  • 10.0.0.* – Wildcards can be used when querying nodes by their IP addresses.
  • master:false,data:true – Queries nodes by their roles.


The endpoints used with this API:

  • _cluster/health – Shows the overall status of the cluster in a simple form.
  • _cluster/state – Gives access to comprehensive information about the cluster.
  • _cluster/stats – Gives access to detailed information about the nodes and indices in the cluster.
  • _cluster/pending_tasks – Shows the tasks waiting in the queue.
  • _cluster/settings – Shows the cluster settings that have been applied.
  • _nodes/stats – Gives access to detailed statistics about all nodes or the selected nodes.
    • _nodes/10.0.0.3,10.0.0.4/stats
    • _nodes/10.0.0.*/stats/os,process
  • _nodes – Lists the information of all nodes or the selected nodes.
  • _nodes/usage – Shows the active features of all nodes or the selected nodes.
  • _remote/info – Shows the information of the configured remote clusters.
  • _cluster/allocation/explain – Explains why a shard cannot be allocated to another node, or why it cannot leave the node it is currently on.


Examples:

curl -XGET "http://10.10.10.10:9200/_cluster/state/_master"
curl -XGET "http://10.10.10.10:9200/_cluster/stats/10.0.0.*"
curl -XGET "http://10.10.10.10:9200/_cluster/state/data:true,coordinating_only:false,ingest:false
curl -XGET "http://10.10.10.10:9200/_cluster/allocation/explain"
  {
     "index": "index_adi",
     "shard": 0,
     "primary": true
  }


Other endpoints

  • index_name/_mapping – Displays the index’s mapping.
  • index_name/_settings – Shows the active settings of the index.
  • index_name/_search – Lists the documents in the index.

Elasticsearch installs with default settings, runs on localhost, and assumes you are in a development environment. Even while running on localhost, once you configure the network.host or transport.host properties, Elasticsearch considers the node to be in production mode. These settings matter for letting the node interact with other servers and be reachable from outside.

While in development mode, you may only receive warnings about the default system, JVM, and Elasticsearch settings that you have not configured. In production mode, the configured properties are checked as the node starts, and the node can be stopped from starting if necessary. These checks are an important safety measure against data loss.

  • The settings mentioned below may vary depending on the operating system and the Elasticsearch installation package.


1. Disabling swapping

  • Some operating systems try to use as much memory as possible for the file system cache and swap out unused memory. This can cause parts of the JVM heap, or even its pages*, to be swapped to disk.
  • Swapping is bad for node stability and performance, and it is costly.
  • It can cause GC to take minutes instead of milliseconds, the node to respond slowly, and the connection between the node and the cluster to be lost.
  • This setting is made on Windows and Linux. The first choice is to prevent swapping completely.
  • Add the following property to the /config/elasticsearch.yml file. Its default value is false.
bootstrap.memory_lock: true
  • To check this setting:
curl -XGET "http://192.168.56.1:9200/_nodes?filter_path=**.mlockall"
  • After this setting is enabled, make sure the node is given a sufficient amount of memory.

* Linux divides physical memory into pieces of memory called pages.


2. Increasing the number of file descriptors or file handles*

  • This setting applies to Linux and macOS.
  • Elasticsearch uses a large number of file descriptors. Make sure their limit is sufficient; if they run out, data loss occurs.
  • A value of 65,536 or higher is recommended for this setting.
  • For the .zip and .tar.gz packages (this must be done before Elasticsearch is started):

ulimit -n 65536

or

sudo vim /etc/security/limits.conf
elasticsearch - nofile 65536

  • For macOS, add the following property to the /config/jvm.options file:
-XX:-MaxFDLimit
  • To check the file descriptor limit:
curl -XGET "http://192.168.56.1:9200/_nodes/stats/process?filter_path=**.max_file_descriptors"
  • If the maximum file descriptor count shows -1, this setting is not supported on that operating system.

* File descriptors (or file handles) are used to access a file or another resource such as a pipe or a network socket.


3. Virtual memory usage

  • The store module controls how the index is stored on disk and how it is accessed. mmapfs is the default file system type used for storage.
  • Elasticsearch stores indices with mmapfs (a memory-mapped file system) and maps the files into memory. The operating system’s default mmap count limit may be too low, which can cause out-of-memory errors.
  • On Linux, run:
sysctl -w vm.max_map_count=262144

 

4. Thread pool usage

  • Elasticsearch uses different thread pools for different types of operations. What matters is creating and using a thread pool when it is needed.
  • Make sure the Elasticsearch user can create at least 4096 threads.
  • On Linux, run:
ulimit -u 4096

or

assign the value 4096 to the nproc property in the /etc/security/limits.conf file.


5. DNS cache settings

The JVM caches domain names indefinitely. DNS settings can change over time, and you may want to change this default JVM behavior. Integer values (in seconds) can be given to the settings below:

networkaddress.cache.ttl=<timeout>
networkaddress.cache.negative.ttl=<timeout>


Determining the number of master nodes required for the cluster to operate

If, for some network-related reason, the nodes in a cluster split from each other and keep running as separate clusters, a “split brain” situation occurs and data is lost. To prevent this, a minimum master node property is set so that, in such a scenario, a cluster keeps operating only when it has the required number of master-eligible nodes.


For example, for a cluster with 2 master-eligible nodes:

Assume a cluster has 2 master-eligible nodes and the minimum master node value is 1. In case of a failure, these nodes lose contact and split apart, so 2 clusters and a split brain form. Because minimum_master_nodes is 1, the master-eligible node in each cluster elects itself master, assumes the other node is dead, and keeps running. When whatever cut off their communication is resolved, one node cannot rejoin the other without being restarted, and if a node is restarted, data is lost.

If the cluster has 2 master-eligible nodes, a minimum_master_nodes value of 2 is recommended. When these nodes are separated, searches can still run on them, but no indexing will happen. If document write operations are needed, raising the number of master-eligible nodes to 3 is recommended.


For example, for a cluster with 3 master-eligible nodes:

Assume a cluster has 3 master-eligible nodes and minimum_master_nodes is 2. Due to a network split, 1 node is separated from the other 2. The side with the single node cannot elect itself master, because it cannot meet the required minimum master node count. The side with 2 nodes can elect a master when necessary and keep operating. When the problem is resolved, the single node joins the others, work continues, and no data is lost.

  • The formula used to configure the minimum master node value for a cluster (e.g., for 3 master-eligible nodes: (3 / 2) + 1 = 2, using integer division):

((number of master-eligible nodes / 2) + 1)

  • Add the following property to the /config/elasticsearch.yml file:
discovery.zen.minimum_master_nodes: 1 (the default value)
  • This setting can also be changed through the Settings API. A property added with the API’s “persistent” option takes effect immediately and survives a full cluster restart, while a property added with the “transient” option also takes effect immediately but is removed when the cluster restarts.
curl -XPUT "http://elastic_ip:elastic_http_port/_cluster/settings" -H "Content-Type: application/json" -d'{ "persistent" : { "discovery.zen.minimum_master_nodes" : 1 } }'


Specifying which operations run on the cluster when there is no active master node

If there is no active master node in the cluster, the discovery.zen.no_master_block property controls which operations will be rejected.

  • all: Both write and read operations on the node are rejected.
  • write: This is the default value. Write operations on the node are rejected; only read operations are performed.
  • Add the following property to the /config/elasticsearch.yml file:
discovery.zen.no_master_block: all
  • This setting can also be changed through the Settings API:
curl -XPUT "http://elasticIp:elasticHttpPort/_cluster/settings" -H "Content-Type: application/json" -d'{ "persistent" : { "discovery.zen.no_master_block" : "all" } }'

Thread Pool

This setting both improves indexing performance and prevents data loss. Thread pools manage how threads consume memory on a node. If a node receives a large volume of requests, the defined thread pool queues them in order instead of dropping them. A node can contain more than one thread pool, and new pools should be added as needed.

If a request arrives while the thread pool’s queue is full, the request is rejected. If the request load exceeds the queue size, a RemoteTransportException is thrown. If this error occurs and is not handled, data loss follows.

Some of the important thread pool settings:

  • write – Used for indexing, deleting, and updating a single document, and for _bulk operations that perform multiple operations in one request.
    • Its queue size can be calculated as number_of_shards x number_of_concurrent_requests (e.g., 5 shards x 100 concurrent requests gives a queue size of 500).
  • index – Used for indexing and delete operations.
  • The other thread pools are described in the Elasticsearch documentation.

Some thread pool parameters:

  • queue_size – Specifies how many requests can wait in the queue. If this parameter is given the value -1, an unlimited number of requests can be queued.
  • size – The number of threads per core. The default value is 5.
  • min_queue_size – Specifies the minimum queue size.
  • max_queue_size – Specifies the maximum queue size.
  • target_response_time – A time value indicating the average response time of thread pool tasks in the queue. If a task exceeds this time, the thread pool rejects it.


Properties like the following are added to Elasticsearch’s /config/elasticsearch.yml file:

thread_pool.index.queue_size: 500
thread_pool.index.size: 10
thread_pool.write.queue_size: 500
thread_pool.write.size: 10


To see how many requests are waiting in the queue, how many are active, and how many have been rejected for the thread pools added to a node:

curl -XGET "http://sunucu_ip:sunucu_port/_cat/thread_pool?v=true"

Hi everyone!

In this article, I am going to define a Script Policy with Groovy on the Apinizer platform. You can also write a Script Policy in JavaScript, but the script language in this example is Groovy. I am going to change the body, headers, and URL parameters of a request with a Groovy script.


Here is our Groovy script:

// Call a GET endpoint; on success, replace the request body with the response.
// bodyText, headerMap, and urlParamMap are variables exposed by the script policy's context.
def get = new URL("https://httpbin.org/get").openConnection();
def getRC = get.getResponseCode();
println(getRC);
if (getRC.equals(200)) {
    bodyText = get.getInputStream().getText();
}

// Call a POST endpoint; on success, store the response in a request header.
def post = new URL("https://httpbin.org/post").openConnection();
def message = '{"message":"this is a message"}'
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
    headerMap.put('message', post.getInputStream().getText());
}

// Add a sample URL parameter to the request.
urlParamMap.put('sampleURLParameterKey', 'sampleURLParameterValue')

 

Select Script Policy from the gateway management screen and type the script to run:

[Image: Script Policy configuration screen]


After entering your Groovy script, you can click the Test Script button to test it. On the screen that opens, you can enter sample test values or execute the script directly.


Here is the executed script result:


[Image: Executed script result]



You can check the changed values of the request’s parts in the Apinizer logs.


[Image: Apinizer logs]

JWT (JSON WEB TOKEN)

A JSON Web Token, or JWT for short, is a secure, standardized web token structure. JWTs can be used for user verification, user recognition, data integrity, and data security.

One of the biggest advantages of JWT is that it contains its data in JSON format. This means many systems, languages and developers are already familiar with the way data is presented.

JWT Structure

[Image: JWT structure]

A JWT is made up of 3 parts encoded in Base64 and separated by dot characters (.): the header, the payload, and the signature.

  • Header: The header typically consists of two parts: the type of the token, which is JWT, and the hashing algorithm being used, such as HMAC SHA256 or RSA.

  • Payload: The second part of the token is the payload, which contains the claims. Claims are statements about an entity (typically, the user) and additional data.

  • Signature: The signature part is created by taking the encoded header, the encoded payload, a secret, and the algorithm specified in the header, then signing them.
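
To make these three parts concrete, here is a minimal sketch that builds and verifies such a token in Java using the open-source jjwt library (the 0.9.x API). The secret and claims are made up for illustration and are not from the article.

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class JwtDemo {
    public static void main(String[] args) {
        String secret = "c2VjcmV0LWRlbW8ta2V5"; // hypothetical Base64-encoded secret

        // Build: header (alg=HS256, typ=JWT) + payload (claims) + signature
        String token = Jwts.builder()
                .setSubject("sefa")          // a claim about the user
                .claim("role", "admin")      // an additional custom claim
                .signWith(SignatureAlgorithm.HS256, secret)
                .compact();                  // the Base64 parts joined with dots
        System.out.println(token);

        // Verify the signature and read a claim back
        String subject = Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(token)
                .getBody()
                .getSubject();
        System.out.println(subject); // sefa
    }
}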

JWT Usage

Let’s imagine a browser client that wants to use one or more operations of a RESTful service. Before it can use the service, it first has to authenticate using the user’s username and password. If the authentication is successful, a token is created for the client. The client will then be able to access the services it wants using the token it got from authentication.

[Image: JWT authentication flow]

  1. In the request body, our client’s information such as username, password, id, and role is given to the REST service for authentication.
{
    "username": "sefa",
    "password": "123Pass...",
    "id": 1,
    "role": "admin",
    [...]
}


A token is generated by the server and returned to the client:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.
eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

2. The generated token is added to the header of the request to call a mock hello REST service.

Request:

curl -X GET \
  -H 'Token: eyJhbGciOiJIUzI1NiIsInR...' \
  -H 'Accept: application/json' \
  -i 'http://127.0.0.1:8080/jwtDemo/hello'


Response:

{
    "timestamp": "1532067358",
    "status": "200",
    "message": "Hello sefa!"
}

3. If we try to access the service without a JWT token, the service will return a “JWT Token is missing” error.

Request (No token this time):

curl -X GET \
  -H 'Accept: application/json' \
  -i 'http://127.0.0.1:8080/jwtDemo/hello'


Response:

{
    "timestamp": "1532067858",
    "status": "500",
    "error": "Internal Server Error",
    "exception": "java.lang.RuntimeException",
    "message": "JWT Token is missing",
    "path": "/rest/hello"
}


Of course, the error message will change from service to service depending on the implementation.

Useful Resources

This short Medium article by Mikey Stecky-Efantis can serve as a good quick-start for JWTs. Auth0, a company that specializes in authentication and authorization management, has a more comprehensive dive into JWTs in its JWT documentation, and it also hosts the jwt.io tool.

What is Jolt

Jolt is a Java-based “JSON to JSON” transformation library. However, the fact that it is Java-based is mostly irrelevant, as transformations aren’t written in Java but in Jolt’s own transformation definition format. This is the main purpose of Jolt: to transform JSON without being bound to a language other than JSON. And if Jolt’s built-in transformation operations aren’t enough for your purposes, it’s easy to extend them using the Java library.

Why Jolt

If we wanted to do JSON transformations without Jolt, we’d have to write specialized transformation code for each different data shape. With this approach, as the number of filters increased, the complexity of the code would also rise unnecessarily. In fact, the size of the code doing the transformation could surpass the size of the code doing the actual work with the data. Another way to avoid writing transformation code by hand would be to build a complicated transformation pipeline:

JSON -> XML -> XSLT/STX -> XML -> JSON

This would obviously raise the complexity on the backend significantly. Additionally, we’d be writing XML to transform JSON data, which could cause confusion here and there.

How to Jolt (Example Transformations)

As we said above, Jolt is used by supplying it transformation definitions/rules in a second JSON file. You can think of the process as:

requestedJSON = Jolt(inputJSON, transformJSON)
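
If you’d like to see that pseudocode with the real library, here is a minimal sketch using Jolt’s Java API (the Chainr class); the classpath resource names are placeholders for this example.

import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

import java.util.List;

public class JoltDemo {
    public static void main(String[] args) {
        // spec.json holds the list of operations, input.json the document to
        // transform; both file names are placeholders
        List<Object> spec = JsonUtils.classpathToList("/spec.json");
        Object input = JsonUtils.classpathToMap("/input.json");

        // requestedJSON = Jolt(inputJSON, transformJSON)
        Chainr chainr = Chainr.fromSpec(spec);
        Object output = chainr.transform(input);

        System.out.println(JsonUtils.toJsonString(output));
    }
}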

For example data, imagine we have this JSON with data about movies:

{
    "movies": {
        "The Godfather": {
            "imdb": 92,
            "rotten": 98
        },
        "Goodfellas": {
            "imdb": 87,
            "rotten": 96
        },
        "Apocalypse Now": {
            "imdb": 85,
            "rotten": 96
        }
    }
}

We’re going to remove the movies’ Rotten Tomatoes score (rotten), get rid of the unnecessary "movies" object in the root, and add an available: true value to the movies. This way we’ll have used Jolt’s 3 most common transformations on a single input (in order: remove, shift, default). In addition, we’ll also get to use the wildcard (*) and path (&) operators. There are comments marked with // in the example transform JSON below, but comments aren’t part of JSON, so you’ll have to remove them before using it.

[ // json root is a list, not object
    {
        "operation": "remove", // remove operation
        "spec": { // spec is the place where we give arguments for the operation
            "movies": {
                "*": { // for every object
                    "rotten": "" // empty value means delete rotten
                }
            }
        }
    },
    {
        "operation": "shift", // shift operation
        "spec": {
            "movies": {
                "*": { // again, for every object in movies
                    // put the imdb object in the "&1.imdb" path
                    // &1 means "one object above"
                    "imdb": "&1.imdb" // Example: "Goodfellas.imdb"
                }
            }
        }
    },
    {
        "operation": "default", // default operation
        "spec": {
            "*": { // No need for movies anymore
                "available": true // Add the "available: true" object
            }
        }
    }
]

The JSON we get after this transformation will look as follows:

{
    "The Godfather": {
        "imdb": 92,
        "available": true
    },
    "Goodfellas": {
        "imdb": 87,
        "available": true
    },
    "Apocalypse Now": {
        "imdb": 85,
        "available": true
    }
}

As you may have noticed, the rules we put in the spec are in a list rather than an object. This is very important to keep in mind, as the rules are processed in the order they appear in the list. Changing the order while doing multiple operations can cause unexpected results.

You can run this example (or any other Jolt transformation) yourself on Jolt’s own transform demo website.

Useful Resources

If you want to learn more about Jolt’s internals, this official slide show goes through Jolt’s philosophy and internal workings in a more detailed way. Additionally, since Jolt is an open source project, you can watch their GitHub repo closely for the latest changes.

Jackson Tutorial

Jackson is a very popular Java library for working with the JSON format, be it object serialization/deserialization or just parsing through JSON data.

At the end of this tutorial, you’ll end up with this JSON file by writing nothing but Java.

employee.json:

{
    "id": 305,
    "name": "John",
    "surname": "Smith",
    "email": "johnsmith@gmail.com",
    "address": {
        "id": 322,
        "street": "Baker St.",
        "city": "London",
        "zipCode": "NW1"
    }
}



You should start by creating a Maven project in your IDE. The classes/packages don’t matter at the moment, as you’re going to fill in the pom.xml first:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.apinizer.maven.jacksondemo</groupId>
    <artifactId>jacksondemo</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <mainClass>com.apinizer.jacksondemo.Main</mainClass>
        <jackson.version>2.9.7</jackson.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>${jackson.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.2.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>${mainClass}</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>


This pom.xml will add Jackson to the project and let you build a “fat” jar with dependencies for easy testing.

There is no additional work needed to make Jackson work with Java. Now you simply need to write the classes for our example application. The classes don’t have to include any additional info, use any annotations or implement any interfaces to work with Jackson. All Jackson needs are getters for the classes’ properties and a default constructor.

To create the JSON file at the beginning of the tutorial, you’ll need 2 classes: an Employee and an Address class. Again, the classes don’t need anything special in them besides what you’d normally write.

src/main/java/com/apinizer/jacksondemo/Employee.java

(Make sure to create the com/apinizer/jacksondemo package if it doesn’t exist).


package com.apinizer.jacksondemo;

public class Employee {
    private int id;
    private String name, surname, email;
    private Address address;

    public Employee() {
    }

    public Employee(int id, String name, String surname, String email, Address address) {
        this.id = id;
        this.name = name;
        this.surname = surname;
        this.email = email;
        this.address = address;
    }

    public int getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public String getSurname() {
        return surname;
    }

    public String getEmail() {
        return email;
    }

    public Address getAddress() {
        return address;
    }

    @Override
    public String toString() {
        return String.format("Employee[id=%d,name=%s,surname=%s,email=%s,address=%s]", id, name, surname, email, address);
    }
}

src/main/java/com/apinizer/jacksondemo/Address.java

package com.apinizer.jacksondemo;

public class Address {
    private int id;
    private String street, city, zipCode;

    public Address() {
    }

    public Address(int id, String street, String city, String zipCode) {
        this.id = id;
        this.street = street;
        this.city = city;
        this.zipCode = zipCode;
    }

    public int getId() {
        return id;
    }

    public String getStreet() {
        return street;
    }

    public String getCity() {
        return city;
    }

    public String getZipCode() {
        return zipCode;
    }

    @Override
    public String toString() {
        return String.format("Address[id=%d,street=%s,city=%s,zip=%s]", id, street, city, zipCode);
    }
}


The only extra thing you’ll notice inside the classes (besides the empty default constructor) is the addition of a toString() method. This is going to make it easier to verify that deserialization works when testing.


Now for the actual code you’ll need for serializing/deserializing these classes. You’ll see that it’s very minimal and to-the-point.

src/main/java/com/apinizer/jacksondemo/Main.java

package com.apinizer.jacksondemo;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.io.IOException;

public class Main {

    public static void main(String[] args) throws IOException {
        Employee employee = new Employee(
                305, "John", "Smith", "johnsmith@gmail.com",
                new Address(322, "Baker St.", "London", "NW1")
        );
        ObjectMapper mapper = new ObjectMapper();
        File employeeFile = new File("employee.json");

        mapper.writeValue(employeeFile, employee);

        Employee employeeRead = mapper.readValue(employeeFile, Employee.class);
        System.out.println(employeeRead);
    }
}


Let’s break down what happens when you run the code:

First, an Employee object is created with all its properties (including its Address object) initialized. Then we use Jackson’s ObjectMapper utility to serialize that employee into a file called employee.json. In this example, the data is written to a file for the sake of simplicity, but the writeValue() method can work with plenty of other output streams, so you’re not limited to writing JSON files on disk. Additionally, you could use writeValueAsString() to get a string representation of the JSON to use somewhere else (e.g., to send over the network).
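
For instance, to produce a string instead of a file, you could swap the writeValue() call in Main for something like this (a drop-in fragment using the mapper and employee variables from Main, not a separate class):

// Serialize straight to a String instead of writing employee.json to disk
String json = mapper.writeValueAsString(employee);
System.out.println(json); // same JSON content, ready to send over the network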

After the JSON file is written to disk, the code deserializes it by reading the JSON back from the file with writeValue()‘s complementary method, readValue() (which can also read from more places than just a file). Thanks to the toString() methods implemented in the classes, it should be easy to see this output on your console:

Employee[id=305,name=John,surname=Smith,email=johnsmith@gmail.com,address=Address[id=322,street=Baker St.,city=London,zip=NW1]]

On top of that, an employee.json file should be in the path you ran the program from, containing the data shown at the beginning of this tutorial.

If so: Congratulations, you’ve successfully used Jackson to serialize and deserialize JSON data in Java!