A Simple Decoupled Site with Drupal 8 and Elm

This is going to be a simple exercise to create a decoupled site using Drupal 8 as the backend and an Elm app on the frontend. I pursue two goals with this:

Evaluate how easy it will be to use Drupal 8 to create a RESTful backend.
Show briefly how to set up a simple project with Elm.

We will implement some very simple functionality. On the backend, just a feed of blog posts with no authentication. On the frontend, a list of blog posts and a page to view each post.

Our first step will be the backend.

Before we start, you can find all the code I wrote for this post in this GitHub repository.

Drupal 8 Backend

For the backend, we will use Drupal 8 and the JSON API module to create the API that will feed the frontend. The JSON API module follows the JSON API specification and currently lives in a contrib project, but as announced in the latest DrupalCon Driesnote, the goal is to move it into core as an experimental module in the Drupal 8.6.x release.

But even before that, we need to set up Drupal in a way that is easy to version and to deploy. For that, I have chosen the Drupal Project Composer template. This template has become one of the standards for site development with Drupal 8 and it is quite simple to set up. If Composer is already installed, it is as easy as this:

composer create-project drupal-composer/drupal-project:8.x-dev server --stability dev --no-interaction

This will create a folder called server containing the code structure for the backend. Inside this folder there is a web folder, which is where we have to point our webserver, and it is also where all of our custom code goes. For this case, we will keep the custom code as minimal as possible. Drupal Project also ships with the two best friends of Drupal 8 development: Drush and Drupal Console. If you don't know them, look them up to find out more about what they can do.

After installing our site, we need to add our first dependency, the JSON API module. Again, this is quite easy; inside the server folder, we run the following command:

composer require drupal/jsonapi:2.x

This accomplishes two things: it downloads the module and it adds it to the Composer files. If we are versioning our site with Git, we will see that the module does not appear in the repository, as all vendor code is excluded by the .gitignore provided by default. Instead, the change shows up in the Composer files, and that is what we have to commit.

With the JSON API module downloaded, we can move back to our site and start with site building.

Configuring Our Backend

Let's keep it as simple as possible. For now, we will use a single content type that we will call blog, with as little configuration as possible. Since we will not use Drupal to display the content, we do not have to worry about the display configuration. We will only add the title and body fields to the content type, as Drupal already provides the creation date and author fields.

By default, the JSON API module generates endpoints for all Drupal entities, and that includes our newly created blog content type. We can check all the available resources by accessing the /jsonapi path, which lists all the endpoints. This path is configurable, but it defaults to jsonapi and we will leave it as is. So, with a clean installation, these are all the endpoints we can see:

JSON API default endpoints

But for our little experiment we do not need all those endpoints. I prefer to expose only what is necessary: no more and no less. Out of the box, the JSON API module provides no configurable options in the UI, but there is a contrib module that allows us to customize our API. That module is JSON API Extras:

composer require drupal/jsonapi_extras:2.x

JSON API Extras offers us a lot of options, from disabling an endpoint to changing the path used to access it, renaming the exposed fields, or even renaming the resource itself. Quite handy! After some tweaking, I disabled all the unnecessary resources and most of the fields of the blog content type, reducing it to just the few we will use:

JSONAPI blog resource

Feel free to play with the different options. You will see that you are able to shape the API exactly the way you need it.

Moving Our Configuration to Version Control

If you have experience with Drupal 7, you probably used the Features module to export configuration to code. One of the biggest improvements in Drupal 8 is the Configuration Management Interface (CMI), a generic system that exports all configuration to YAML files. But even though this system works great, it is still not the most intuitive or easiest way to manage configuration. Using it as a base, there are now several options that extend CMI and provide an improved developer experience. The two biggest players in this game are [Config Split](https://www.drupal.org/project/config_split) and the good old [Features](https://www.drupal.org/project/features).

Both options are great, but I decided to go with my old friend Features (maybe because I'm used to its UI). The first step is to download the module:

composer require drupal/features:3.x

One of the really cool features of the Drupal 8 version of the Features module is that it can instantly create an installation profile with all our custom configuration. With just a few clicks we have exported all the configuration from the previous steps; not only that, we have also created an installation profile that will allow us to replicate the site easily. You can read more about Features in the [official documentation on drupal.org](https://www.drupal.org/docs/8/modules/features/building-a-distribution-with-features-3x).

Now we have the basic functionality of the backend in place. There are still some things we should do, such as restricting access to the backend interface and preventing login or registration on the site, but we will not cover that in this post. We can move on to the next step: the Elm frontend.

Sidenote

I used Features in this project to give it a try and play a bit. If you are building a real project, you might want to consider other options; even the creators of the Features module suggest not to use it for this kind of situation, as you can read here.

The Frontend

As mentioned, we will use Elm to write this app. If you do not know it, Elm is a pure functional language that compiles to JavaScript and is used to create reliable web apps.

Installing Elm is easy. You can build it from source, but the easiest and recommended way is to just use npm. So let's do it:

npm install -g elm

Once we install Elm, we get four different commands:

elm-repl: an interactive Elm shell that allows us to play with the language.
elm-reactor: an interactive development tool that automatically compiles our code and serves it in the browser.
elm-make: compiles our code and builds the app we will upload to the server.
elm-package: the package manager to download or publish Elm packages.

For this little project, we will mostly use elm-reactor to test our app. We can begin by starting the reactor and accessing it in the browser. Once we do that, we can start coding.

elm-reactor

Elm Reactor

Our First Elm Program

If you wish to make an apple pie from scratch, you must first invent the universe.
Carl Sagan

We start by creating a src folder that will contain all our Elm code, and there we start the reactor with elm-reactor. If we go to our browser and access http://localhost:8000, we will see our empty folder. Time to create a Main.elm file in it. This file will be the root of our codebase and everything will grow from here. We can start with the simplest of all Elm programs:

module Main exposing (main)

import Html exposing (text)


main =
    text "Hello world"

This might seem simple, but when we access the Main.elm file in the reactor, there is some magic going on. The first thing we notice is that we now have a working page. It is simple, but it is an HTML page generated with Elm. And that is not the only thing that happened: in the background, elm-reactor noticed that we imported the Html package, created an elm-package.json file, added the package as a dependency and downloaded it.

This might be a good moment to make the first commit of our app. We do not want to include the vendor packages from Elm, so we create a .gitignore file and add the elm-stuff folder to it. Our first commit will include only three things: the Main.elm file, the .gitignore and the elm-package.json file.

The Elm Architecture

Elm is a language that follows a strict pattern called [The Elm Architecture](https://guide.elm-lang.org/architecture/). We can summarize it in three simple components:

Model, which represents the state of the application.
Update, how we update our application.
View, how we represent our state.
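Before we apply it to the blog, here is the pattern in its smallest possible form: a throwaway counter, wired together with beginnerProgram from the Html package. This sketch is not part of our app; it only shows how model, update and view fit together:

module Counter exposing (main)

import Html exposing (Html, button, div, text)
import Html.Events exposing (onClick)


type alias Model =
    Int


type Msg
    = Increment


update : Msg -> Model -> Model
update msg model =
    case msg of
        Increment ->
            model + 1


view : Model -> Html Msg
view model =
    div [] [ button [ onClick Increment ] [ text (toString model) ] ]


main : Program Never Model Msg
main =
    Html.beginnerProgram { model = 0, view = view, update = update }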

Given our small app, let's try to represent our code with this pattern. Right now, our app is static and has no functionality at all, so there is not much to do. But, for example, we could start by moving the text we show on the screen to the model. The view will be the content we currently have in our main function, and as our page has no functionality yet, the update will do nothing at this stage.

type alias Model =
    String


model : Model
model =
    "Hello world"


-- we have no Msg type yet, so the view uses the type variable `msg`
view : Model -> Html msg
view model =
    text model


main =
    view model

Now, for our blog, we need two different pages: the first one a listing of blog posts and the second one a page for an individual post. To simplify, let's keep the blog entries as just a string for now. Our model will evolve into a list of posts. In our state, we also need to store which page we are on, so let's create a type for that information and add it to our model:

type alias Model =
    { posts : List Post
    , activePage : Page
    }


type alias Post =
    String


-- we call the listing page BlogList to avoid clashing with the core List type
type Page
    = BlogList
    | Blog


model : Model
model =
    { posts = [ "First blog", "Second blog" ]
    , activePage = BlogList
    }

And we need to update our view too:

view : Model -> Html Msg
view model =
    div
        []
        (List.map viewPost model.posts)


viewPost : Post -> Html Msg
viewPost post =
    div
        []
        [ text post ]

We now have the possibility of creating multiple pages! Let's create the update function that will modify the model based on the different actions we perform on the page. Right now, our only action will be navigating the app, so let's start there:

type Msg
    = NavigateTo Page

And now our update will change the activePage of our model based on this message:

-- beginnerProgram (which we will use below) expects update to simply return the new model
update : Msg -> Model -> Model
update msg model =
    case msg of
        NavigateTo page ->
            { model | activePage = page }

Our view should now be different depending on the active page:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog ->
            div [] [ text "This is a single blog post" ]

viewBlogList : List Post -> Html Msg
viewBlogList posts =
    div
        []
        (List.map viewPost posts)

Next, let's wire the update into the rest of the code. First, from the views we fire the message that changes the page:

-- onClick comes from Html.Events: import Html.Events exposing (onClick)
viewPost post =
    div
        [ onClick <| NavigateTo Blog ]
        [ text post ]

And as a last step, we replace the main function with a more complex function from the Html package (but still a beginner program):

main : Program Never Model Msg
main =
    beginnerProgram
        { model = model
        , view = view
        , update = update
        }

But we still have not properly represented the individual blog posts on their own pages. We will have to update our model once again, along with our definition of Page:

-- Dict comes from the core library: import Dict exposing (Dict)
type alias Model =
    { posts : Dict PostId Post
    , activePage : Page
    }


type alias PostId =
    Int


type Page
    = BlogList
    | Blog PostId


model : Model
model =
    { posts = Dict.fromList [ ( 1, "First blog" ), ( 2, "Second blog" ) ]
    , activePage = BlogList
    }

And with some minor changes, we have the views working again:

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            div
                [ onClick <| NavigateTo BlogList ]
                [ text "This is a single blog post" ]


viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div
        []
        (Dict.map viewPost posts |> Dict.values)


viewPost : PostId -> Post -> Html Msg
viewPost postId post =
    div
        [ onClick <| NavigateTo <| Blog postId ]
        [ text post ]

We do not see any change on our site yet, but we are ready to replace the placeholder text of the individual pages with the content of the real post. And here comes one of the cool features of Elm, and one of the reasons why Elm has no runtime exceptions. We have a postId and we can get the Post from the collection of posts in our model. But when getting an item from a Dict, there is always the possibility that the item does not exist. Calling a function on a non-existing item usually causes errors, like the infamous "undefined is not a function". In Elm, if a function may or may not return a value, it returns a special type called Maybe.
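For example, a quick session in elm-repl illustrates how Dict.get wraps its result (output reproduced from memory, so the exact type annotations may differ slightly):

> import Dict
> posts = Dict.fromList [ ( 1, "First blog" ), ( 2, "Second blog" ) ]
> Dict.get 1 posts
Just "First blog" : Maybe.Maybe String
> Dict.get 99 posts
Nothing : Maybe.Maybe String

With that in mind, this is how our view handles the case where the post is missing: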

view : Model -> Html Msg
view model =
    case model.activePage of
        BlogList ->
            viewBlogList model.posts

        Blog postId ->
            let
                -- This is our Maybe variable. It could be annotated as `Maybe Post`,
                -- the full definition of Maybe being:
                -- type Maybe a
                --     = Just a
                --     | Nothing
                post =
                    Dict.get postId model.posts
            in
            case post of
                Just aPost ->
                    div
                        [ onClick <| NavigateTo BlogList ]
                        [ text aPost ]

                Nothing ->
                    div
                        [ onClick <| NavigateTo BlogList ]
                        [ text "Blog post not found" ]

Loading the Data from the Backend

We have all the functionality ready, but we have to do something else before loading the data from the backend: update our Post definition to match the structure of the backend. On the Drupal side, we left a simple blog data structure:

ID
Title
Body
Creation date

Let's update Post, replacing it with a record containing those fields. After the change, the compiler will tell us where else we need to adapt our code. For now, we will not worry about dates and will just treat the created field as a string.

type alias Post =
    { id : PostId
    , title : String
    , body : String
    , created : String
    }


model : Model
model =
    { posts = Dict.fromList [ ( 1, firstPost ), ( 2, secondPost ) ]
    , activePage = BlogList
    }


firstPost : Post
firstPost =
    { id = 1
    , title = "First blog"
    , body = "This is the body of the first blog post"
    , created = "2020-04-18 19:00"
    }

Then, the compiler shows us where we have to change the code to make it work again:

Elm compiler helps us find the errors

-- In the view function:
case post of
    Just aPost ->
        div
            []
            [ h2 [] [ text aPost.title ]
            , div [] [ text aPost.created ]
            , div [] [ text aPost.body ]
            , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
            ]


-- And we improve `viewPost` a bit, becoming `viewPostTeaser`:
viewBlogList : Dict PostId Post -> Html Msg
viewBlogList posts =
    div
        []
        (Dict.map viewPostTeaser posts |> Dict.values)


viewPostTeaser : PostId -> Post -> Html Msg
viewPostTeaser postId post =
    div
        [ onClick <| NavigateTo <| Blog postId ]
        [ text post.title ]

As our data structure now reflects the data model on the backend, we are ready to import the information from the web service. For that, Elm offers a system called decoders. We will also add a community package to simplify writing them:

elm-package install NoRedInk/elm-decode-pipeline

And now we add our decoders:

-- These decoders assume the following imports:
-- import Json.Decode exposing (Decoder, dict, string)
-- import Json.Decode.Pipeline exposing (decode, required)
-- PostList is our alias for the collection of posts (see the repository for its exact definition)
postListDecoder : Decoder PostList
postListDecoder =
    dict postDecoder


postDecoder : Decoder Post
postDecoder =
    decode Post
        |> required "id" string
        |> required "title" string
        |> required "body" string
        |> required "created" string

As our data will now come from a request, we need to update our Model again to represent the different states a request can be in:

type alias Model =
    { posts : WebData PostList
    , activePage : Page
    }


type WebData data
    = NotAsked
    | Loading
    | Error
    | Success data

This way the Elm compiler protects us, as we are forced to handle every state the request can be in, including failure. We now have to update our view to work with this new state:

view : Model -> Html Msg
view model =
    case model.posts of
        NotAsked ->
            div [] [ text "Loading..." ]

        Loading ->
            div [] [ text "Loading..." ]

        Success posts ->
            case model.activePage of
                BlogList ->
                    viewBlogList posts

                Blog postId ->
                    let
                        post =
                            Dict.get postId posts
                    in
                    case post of
                        Just aPost ->
                            div
                                []
                                [ h2 [] [ text aPost.title ]
                                , div [] [ text aPost.created ]
                                , div [] [ text aPost.body ]
                                , a [ onClick <| NavigateTo BlogList ] [ text "Go back" ]
                                ]

                        Nothing ->
                            div
                                [ onClick <| NavigateTo BlogList ]
                                [ text "Blog post not found" ]

        Error ->
            div [] [ text "Error loading the data" ]

We are ready to decode the data; the only thing left is the request itself. Most requests on a site happen when clicking a link (usually a GET) or when submitting a form (POST or GET), and with AJAX we also make requests in the background to fetch data that was not needed when the page first loaded but is needed afterwards. In our case, we want to fetch the data as soon as the page is loaded. We can do that with a command, or, as it appears in the code, a Cmd:

fetchPosts : Cmd Msg
fetchPosts =
    let
        url =
            "http://drelm.local/jsonapi/blog"
    in
    -- FetchPosts is the Msg constructor that will carry the result of the request (handled in update below)
    Http.send FetchPosts (Http.get url postListDecoder)

But we have to use a new program function to pass the initial commands:

main : Program Never Model Msg
main =
    program
        { init = init
        , view = view
        , update = update
        , subscriptions = subscriptions
        }
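Note that program, unlike beginnerProgram, expects update to return a command along with the new model, so our update function now returns a ( Model, Cmd Msg ) tuple:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            ( { model | activePage = page }, Cmd.none )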

Let’s forget about the subscriptions, as we are not using them:

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.none

Now we just need to update our initial data, our init value:

model : Model
model =
    { posts = NotAsked
    , activePage = BlogList
    }


init : ( Model, Cmd Msg )
init =
    ( model
    , fetchPosts
    )
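One piece we have not spelled out is how the response makes it back into the model. Http.send tags the result of the request with the FetchPosts constructor, so our Msg type and update need to handle it. A minimal sketch (the repository may structure this slightly differently):

type Msg
    = NavigateTo Page
    | FetchPosts (Result Http.Error PostList)


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            ( { model | activePage = page }, Cmd.none )

        FetchPosts (Ok posts) ->
            ( { model | posts = Success posts }, Cmd.none )

        FetchPosts (Err _) ->
            ( { model | posts = Error }, Cmd.none )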

And this is it! When the page is loaded, the program will use the command we defined to fetch all our blog posts! Check it out in the screencast:

Screencast of our sample app

If at some point that request becomes too heavy, we could change it to fetch only titles and summaries, or just a small number of posts. We could add another fetch when we scroll down, or we could fetch the full post when navigating, from the update function. Did you notice that the signature of update ends with ( Model, Cmd Msg )? That means we can return commands there to fetch data instead of just Cmd.none. For example:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        NavigateTo page ->
            let
                command =
                    case page of
                        Blog postId ->
                            fetchPost postId

                        BlogList ->
                            Cmd.none
            in
            ( { model | activePage = page }, command )
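The fetchPost function used here is not defined anywhere in this post. A rough sketch of what it could look like, assuming a FetchPost message for a single post and a URL pattern matching our customized API (the repository may do this differently):

fetchPost : PostId -> Cmd Msg
fetchPost postId =
    let
        url =
            "http://drelm.local/jsonapi/blog/" ++ toString postId
    in
    Http.send FetchPost (Http.get url postDecoder)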

But let’s leave all of this implementation for a different occasion.

And that's all for now. I might have missed something, as the frontend part grew a bit more than I expected, but check the repository, as the code there has been tested and is working fine. If you have any questions, feel free to add a comment and I will try to reply as soon as I can!

End Notes

I did not dwell too much on the syntax of Elm, as there is already plenty of documentation on the official page. The goal of this post is to show how a simple app is created from the very start and to give a simple example of the Elm Architecture.

If you try to follow this tutorial step by step, you may find an issue when trying to fetch the data from the backend while using elm-reactor. I had that issue too; it is a browser defense against [cross-site request forgery](https://es.wikipedia.org/wiki/Cross-site_request_forgery). If you check the repo, you will see that I replaced the default function for GET requests, Http.get, with a custom function to prevent this.

I also didn’t add any CSS styling because the post would be too long, but you can find plenty of information on that elsewhere.
