Wednesday, August 31, 2016

#CSS #HTML #JavaScript Collection of Code convert tools and utilities. #HTML #CSS #JavaScript … https://t.co/H8gqjRWa3v


from Twitter https://twitter.com/devwebbest

What is Zapier? How to Automate Your Business Tasks Better

20 Best WordPress Directory Themes: For Business Listing Sites and More

The Portable Guitarist—Hardware

The Portable Guitarist—Hardware

In my first tutorial, The Portable Guitarist—Using iOS as a Live Rig, I explained why an iOS-based live rig has many advantages. In this tutorial, I'll explain the core of the set-up: the hardware.

Before you start, you should consider the reason that you'll be using the device.

Consider Why You'll Be Using an iOS Device

Guitarists often use time-based apps like delays and reverbs. If, however, the device has insufficient memory, the sound will glitch and stutter terribly. Before choosing a device to use, bear this in mind: the more processing power and memory you can get, the better. 

Therefore, when looking at the specifications of a preferred device (which you can find on the relevant product page of the Apple website), pay attention to:

  • Which generation of processor is on board: the more recent, the better
  • How much memory the device has: be aware that when 16GB of memory is quoted, the reality is nearer 12GB due to system requirements

Don’t spend everything on the device. It may be the centre of your sound set-up, but there’ll be peripherals to buy, so budget accordingly.

For further savings, check out the Refurbished section of the Apple Store. Not only are items cheaper, but they’ll be as-new, and come with a 1-year Apple warranty.

An iPad Is the Sensible Choice

Personally, I favour the iPad. If nothing else, a bigger screen is easier to see on a darkened stage (even if iPhones seem to get bigger with each iteration). Plus, on-screen adjustments are easier; no-one wants to hear you accidentally engaging your death metal tone during a sensitive ballad.

Then there’s battery life; having used both an iPad and iPhone live, the battery life of an iPhone can plummet alarmingly. True, if it’s fully charged to start with, you’re unlikely to play long enough to drain it down, and a break between sets lets you recharge it. However, I’d rather focus on my performance than the battery icon.

Remember to check the Apple store for refurbished iPads.

Don't Dismiss the iPhone or iPod Touch

Screen size and battery life aside, the iPhone and iPod Touch suffer little in specification; currently, the iPhone 6, 6 Plus, and iPod Touch all share the same processor as the iPad mini 4, and are all available up to 64GB. The iPhone 6S and 6S Plus offer an improved processor and a maximum of 128GB of memory.

It can also be cheaper; Apple currently charges £159 for the 16GB iPod Touch. If you can afford it, the top-of-the-range 128GB is £329; compare this to the iPhone 6S at £539, and the iPad Air 2 at £559.

So they can be cheaper, and they’re certainly even more portable. They generally run the same apps as the iPad, certainly the guitar-centric ones, and accept the same interfaces.

Interfaces

There are generally two types of interface: 

  1. headphone socket, and 
  2. dock 

Headphone Socket

These were the original iOS interface, and the iRig by IK Multimedia is probably the most famous. Beautifully straightforward, and attractively cheap, it was a trailblazer. 

In 2015 it was superseded by the iRig2, which is a little bigger, and has more features such as input gain control. Crucially, it’s still cheap; the iRig2 costs under £30. The original iRig is still available, and I’ve found it for as little as £15 new.

However, this wonderful cheapness comes at a price: noise. Lots and lots of noise.

The headphone socket is supposed to be for output, so introducing an input signal in such close proximity creates hiss, and can lead to feedback.

The Ampkit Link from Peavey tried to address this by being battery-powered, and it certainly meant a higher signal level could be achieved before feedback occurred. But it didn't cure it, so guitarists relied heavily on noise gates.

A better solution is that of a dock interface.

Dock Interfaces

Unlike the headphone socket, the dock doesn’t rely on just three connections, so input and output signals can be kept separate, significantly cleaning up the sound. It also frees the headphone socket to do what it was designed to do.

Of the models on the market, and ignoring those of a desktop nature, my choice is the Jam from Apogee. It’s extremely compact, and requires no external power source. 

Other features include the integral gain control which means it can also be used with certain microphones. It is compatible with both iOS and Mac for live performance and home recording. It also still serves the 30-pin format of older devices, alongside that of current, Lightning-equipped ones.

Quality of sound is what matters, and the Jam’s 24-bit, 48kHz digital converters don’t disappoint. If that isn’t good enough for you, there’s a 96kHz version.

All of this sonic goodness does come at a higher price, however; a 48kHz Jam is typically over £70, and the 96kHz version is over £100. If you want to go slightly cheaper, check out the SonicPort from Line 6.

Hands Free

Using iOS live has meant having one's hands more on a screen than perhaps many guitarists would like; this becomes increasingly applicable if you use lots of different sounds.

Bluetooth-based foot controllers, however, exploit the fact that many apps accept MIDI commands. Consequently, switching pedals on-screen becomes no different from kicking old-school stompboxes on and off.

As for what’s available, the iRig BlueBoard from IK Multimedia has lots to recommend it; four backlit footpads, expression pedal inputs, fits in a gig bag, and works with most iDevices, as well as Mac.

If plastic seems flimsy, check out the all-metal BT-4 from Positive Grid; more expensive, but more robust.

Conclusion

In looking at the core of a live set-up, I have shown you that

  • An iPad is a preferable option
  • An iPhone or iPod Touch can be cheaper and isn't necessarily a poorer choice
  • Go for the highest specification you can afford
  • Apple Refurbished models can save you money
  • A dock interface is the only serious choice for sound quality
  • A Bluetooth foot controller brings familiar analogue functionality

In the next tutorial, I’ll show you how to mount and connect equipment effectively for a live environment.


How to Design a New Brand Identity for Your Business

Let's Go: Command-Line Programs With Golang

Let's Go: Command-Line Programs With Golang

Overview

Go is an exciting new language that is gaining a lot of popularity, for good reason. In this tutorial you'll learn how to write command-line programs with Go. The sample program is called multi-git, and it allows you to execute git commands on multiple repositories at the same time.

Quick Introduction to Go

Go is an open-source C-like language created at Google by some of the original C and Unix hackers, who were motivated by their dislike of C++. It shows in Go's design, which made several unorthodox choices such as eschewing implementation inheritance, templates, and exceptions. Go is simple, reliable, and efficient. Its most distinctive feature is its explicit support for concurrent programming via so-called goroutines and channels.

Before starting to dissect the sample program, follow the official guide to get ready for Go development.

The Multi-Git Program

The multi-git program is a simple but useful Go program. If you work on a team where the codebase is split across multiple git repositories then you often need to perform changes across multiple repositories. This is a problem because git has no concept of multiple repositories. Everything revolves around a single repository. 

This becomes especially troublesome if you use branches. If you work on a feature that touches three repositories then you will have to create a feature branch in each of these repositories and then remember to check out, pull, push, and merge all of them at the same time. This is not trivial. Multi-git manages a set of repositories and lets you operate on the whole set at once. Note that the current version of multi-git requires that you create the branches individually, but I may add this feature at a later date.

By exploring the way multi-git is implemented, you will learn a lot about writing command-line programs in Go.

Packages and Imports

Go programs are organized in packages. The multi-git program consists of a single file called main.go. At the top of the file, the package name 'main' is specified, followed by a list of imports. The imports are other packages that are used by multi-git.

For example, the fmt package is used for formatted I/O, similar to C's printf and scanf. Go supports installing packages from a variety of sources via the go get command. When you install packages, they end up under the directory pointed to by the $GOPATH environment variable. You can install packages from a variety of sources such as GitHub, Bitbucket, Google Code, Launchpad, and even IBM DevOps Services, via several common version control systems such as Git, Subversion, Mercurial, and Bazaar.

Command-Line Arguments

Command-line arguments are one of the most common forms of providing input to programs. They are easy to use, allow you to run and configure the program in one line, and have great parsing support in many languages. Go calls them command-line "flags" and has the flag package for specifying and parsing command-line arguments (or flags). 

Typically, you parse command-line arguments at the beginning of your program, and multi-git follows this convention. The entry point is the main() function. The first two lines define two flags called "command" and "ignoreErrors". Each flag has a name, a data type, a default value, and a help string. The flag.Parse() call will parse the actual command-line passed to the program and will populate the defined flags.

It is also possible to access undefined arguments via the flag.Args() function. So, flags stand for pre-defined arguments, and "args" are unprocessed arguments. The unprocessed arguments are zero-indexed.

Environment Variables

Another common form of program configuration is environment variables. When you use environment variables, you may run the same program multiple times in the same environment, and all runs will use the same environment variables. 

Multi-git uses two environment variables: "MG_ROOT" and "MG_REPOS". Multi-git is designed to manage a group of git repositories that have a common parent directory. That's "MG_ROOT". The repository names are specified in "MG_REPOS" as a comma-separated string. To read the value of an environment variable you can use the os.Getenv() function.

Verifying the Repository List

Now that it has found the root directory and the names of all the repositories, multi-git verifies that each repository exists under root and that it is really a git repository. The check is as simple as looking for a .git sub-directory in each repository directory.

First, an array of strings named "repos" is defined. Then it iterates over all the repo names and constructs a repository path by concatenating the root directory and the repo name. If the os.Stat() call fails for the .git subdirectory, it logs the error and exits. Otherwise, the repository path is appended to the repos array.

Go has a unique error-handling facility where functions often return both a return value and an error object. Check out how os.Stat() returns two values. In this case the "_" placeholder is used to hold the actual result because you only care about the error. Go is very strict and requires declared variables to be used, so if you don't plan to use a value, you should assign it to "_" to avoid a compilation error.

Executing Shell Commands

At this point, you have your list of repository paths where you want to execute the git command. As you recall, the git command line was received as a single command-line argument (flag) called "command". This needs to be split into an array of components (git command, sub-command, and options). The whole command is also stored as a string for display purposes.

Now, you're all set to iterate over each repository and execute the git command in each one. The "for ... range" loop construct is used again. First, multi-git changes its working directory to the current target repo "r" and prints the git command. Then it executes the command using the exec.Command() function and prints the combined output (both standard output and standard error). 

Finally, it checks if there was an error during execution. If there was an error and the ignoreErrors flag is false then multi-git bails out. The reason for optionally ignoring errors is that sometimes it's OK if commands fail on some repos. For example, if you want to check out a branch called "cool feature" on all the repositories that have this branch, you don't care if the checkout fails on repositories that don't have this branch.

Conclusion

Go is a simple yet powerful language. It's designed for large-scale system programming, but works just fine for small command-line programs too. Go's minimal design is in stark contrast to other modern languages like Scala and Rust, which are also very powerful and well designed, but have a much steeper learning curve. I encourage you to try Go and experiment. It's a lot of fun.


How to Create a Coloring Book Style Illustration in Adobe Illustrator

How to Perfectly Retouch Makeup for Beauty and Fashion Photography in 5 Steps

Here Is What to Look For When You Buy Photography Lenses

Tuesday, August 30, 2016

Creating a Low Poly Aeroplane Set for Games: Part 2

How to Run an Effective Brainstorming Session

How to Find a Great Job and Get Hired (In the Next 30 Days)

How to Create WordPress Pages With Hierarchy and Templates

How to Build a Responsive UI Component Using Element Queries

How to Create a Multi-Image Twitter Header Image With Adobe Photoshop

Installing AMP in WordPress

How to Draw a Car From Scratch

How to Add Custom Callouts to Screencast Videos in Screenflow

Monday, August 29, 2016

How to Stabilize Video in After Effects with ReelSteady

How to Stabilize Video in After Effects with ReelSteady

In this tutorial you will learn how to get started using the plug-in ReelSteady to stabilize footage in Adobe After Effects. We will take a look at a few common scenarios shot on various formats, such as a DSLR and GoPro cameras. You will also learn about the best settings to use on your camera before filming shots you plan on stabilizing in post.

What You Need

Besides After Effects, in order to follow along with this lesson you will need to download the plug-in ReelSteady. A free-trial version is available to download as well, so you can easily follow along with this tutorial and experiment using ReelSteady on your own footage.

Tips Before You Shoot

  • Shoot with a high shutter speed (I usually shoot around 200 to 320.)
  • Use a wide-angle lens. A wide-angle or fish-eye lens works best, but if you don't have one of those, just shoot with the widest lens you have available.
  • Shoot in 4K, or the highest resolution you can. More pixels means more tracking data and less of a chance for pixelization to occur.
  • Film using the 'Tripod Trick'. If you have a tripod, attach it to the bottom of your camera and hold it while moving or walking. This will help prevent the high-frequency vibration that can cause excessive rolling shutter artifacts.
  • Be aware of parallax between objects in your scene.

How to Use ReelSteady to Stabilize Footage in Adobe After Effects

Links Mentioned





5 Inspirational Baby Photographs and How to Make Your Own

How to Set Up a Gmail (Out of Office) Vacation Responder Email

Serialization and Deserialization of Python Objects: Part 2

Serialization and Deserialization of Python Objects: Part 2

This is part two of a tutorial on serializing and deserializing Python objects. In part one, you learned the basics and then dove into the ins and outs of Pickle and JSON. 

In this part you'll explore YAML (make sure to have the running example from part one), discuss performance and security considerations, get a review of additional serialization formats, and finally learn how to choose the right scheme.

YAML

YAML is my favorite format. It is a human-friendly data serialization format. Unlike Pickle and JSON, it is not part of the Python standard library, so you need to install it:

pip install pyyaml

The yaml module has only load() and dump() functions. By default they work with strings, like pickle's and json's loads() and dumps(), but each can take a second argument, which is an open stream, and then they dump to or load from a file.
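
For example, a round trip of the simple object graph from the running example might look like this (recent versions of PyYAML also want an explicit Loader argument, which the examples in this series predate):

```python
import yaml

simple = dict(int_list=[1, 2, 3], text='string', number=3.44, boolean=True, none=None)

serialized = yaml.dump(simple)              # returns a string by default
print(serialized)

with open('simple.yaml', 'w') as f:         # pass an open stream to write to a file instead
    yaml.dump(simple, f)

restored = yaml.load(serialized, Loader=yaml.SafeLoader)
assert restored == simple
```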

Note how readable YAML is compared to Pickle or even JSON. And now for the coolest part about YAML: it understands Python objects! No need for custom encoders and decoders. Here is the complex serialization/deserialization using YAML:
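
A minimal sketch of that round trip, using a stand-in A class and datetime (the exact fields are illustrative):

```python
import yaml
from datetime import datetime

class A(object):
    def __init__(self, simple):
        self.simple = simple

complex_graph = dict(a=A(dict(int_list=[1, 2, 3])), when=datetime(2016, 3, 7, 12, 30))

serialized = yaml.dump(complex_graph)
print(serialized)            # note the !!python/object:__main__.A tag and the bare timestamp

# The full Loader is needed to reconstruct arbitrary Python objects.
deserialized = yaml.load(serialized, Loader=yaml.Loader)
print(type(deserialized['a']), deserialized['when'])
```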

As you can see, YAML has its own notation to tag Python objects. The output is still very human readable. The datetime object doesn't require any special tagging because YAML inherently supports datetime objects. 

Performance

Before you start thinking of performance, you need to think if performance is a concern at all. If you serialize/deserialize a small amount of data relatively infrequently (e.g. reading a config file at the beginning of a program) then performance is not really a concern and you can move on.

But, assuming you profiled your system and discovered that serialization and/or deserialization are causing performance issues, here are the things to address.

There are two aspects to performance: how fast can you serialize/deserialize, and how big is the serialized representation?

To test the performance of the various serialization formats, I'll create a largish data structure and serialize/deserialize it using Pickle, YAML, and JSON. The big_data list contains 5,000 complex objects.

Pickle

I'll use IPython here for its convenient %timeit magic function that measures execution times.
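
If you don't have IPython handy, the standard library's timeit module gives comparable measurements. Here is a sketch; the exact shape of big_data is illustrative, but note that every element references the same shared sub-object, which will matter for YAML later:

```python
import pickle
import timeit
from datetime import datetime

shared = {'x': [1, 2, 3], 'y': 'text'}     # the same object is referenced by every element
big_data = [dict(a=shared, index=i, when=datetime(2016, 8, 25, 10, 30)) for i in range(5000)]

for protocol in (0, pickle.HIGHEST_PROTOCOL):      # textual protocol first, then the latest binary one
    serialized = pickle.dumps(big_data, protocol)
    dump_time = timeit.timeit(lambda: pickle.dumps(big_data, protocol), number=10) / 10
    load_time = timeit.timeit(lambda: pickle.loads(serialized), number=10) / 10
    print(protocol, len(serialized), dump_time, load_time)
```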

The default pickle takes 83.1 milliseconds to serialize and 29.2 milliseconds to deserialize, and the serialized size is 747,328 bytes.

Let's try with the highest protocol.

Interesting results. The serialization time shrank to only 21.2 milliseconds, but the deserialization time increased a little to 25.2 milliseconds. The serialized size shrank significantly to 394,350 bytes (52%).

JSON
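
Here is a comparable sketch for JSON. Because the data contains datetime objects, it uses a minimal dunder-style encoder and a matching object_hook as stand-ins for the ones developed in part one:

```python
import json
import timeit
from datetime import datetime

shared = {'x': [1, 2, 3], 'y': 'text'}
big_data = [dict(a=shared, index=i, when=datetime(2016, 8, 25, 10, 30)) for i in range(5000)]

class CustomEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, datetime):
            return {'__datetime__': o.strftime('%Y-%m-%dT%H:%M:%S')}
        return json.JSONEncoder.default(self, o)

def decode_object(d):
    # Called for every decoded dict, which is exactly where the slowdown comes from.
    if '__datetime__' in d:
        return datetime.strptime(d['__datetime__'], '%Y-%m-%dT%H:%M:%S')
    return d

serialized = json.dumps(big_data, cls=CustomEncoder)
print(len(serialized))
print(timeit.timeit(lambda: json.dumps(big_data, cls=CustomEncoder), number=10) / 10)
print(timeit.timeit(lambda: json.loads(serialized, object_hook=decode_object), number=10) / 10)
print(timeit.timeit(lambda: json.loads(serialized), number=10) / 10)   # no hook: much faster
```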

Ok. Performance seems to be a little worse than Pickle for encoding, but much, much worse for decoding: 6 times slower. What's going on? This is an artifact of the object_hook function that needs to run for every dictionary to check if it needs to convert it to an object. Running without the object hook is much faster.

The lesson here is that when serializing and deserializing to JSON, consider very carefully any custom encodings because they may have a major impact on the overall performance.

YAML
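
And a comparable sketch for YAML (number=1, because each run takes seconds):

```python
import timeit
from datetime import datetime
import yaml

shared = {'x': [1, 2, 3], 'y': 'text'}
big_data = [dict(a=shared, index=i, when=datetime(2016, 8, 25, 10, 30)) for i in range(5000)]

serialized = yaml.dump(big_data)
print(len(serialized))
print(timeit.timeit(lambda: yaml.dump(big_data), number=1))
print(timeit.timeit(lambda: yaml.load(serialized, Loader=yaml.Loader), number=1))
```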

Ok. YAML is really, really slow. But, note something interesting: the serialized size is just 200,091 bytes. Much better than both Pickle and JSON. Let's look inside real quick:

YAML is being very clever here. It identified that all 5,000 dicts share the same value for the 'a' key, so it stores it only once and references it using *id001 for all objects.
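
You can see the same aliasing mechanism in a tiny, self-contained example: any object that appears more than once gets an anchor, and later occurrences become references:

```python
import yaml

shared = dict(a=1)
print(yaml.dump([shared, shared]))
# Output (roughly):
# - &id001
#   a: 1
# - *id001
```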

Security

Security is often a critical concern. Pickle and YAML, by virtue of constructing Python objects, are vulnerable to code-execution attacks. A cleverly formatted file can contain arbitrary code that will be executed by Pickle or YAML. There is no need to be alarmed. This is by design and is documented in Pickle's documentation:

Warning: The pickle module is not intended to be secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

As well as in YAML's documentation:

Warning: It is not safe to call yaml.load with any data received from an untrusted source! yaml.load is as powerful as pickle.load and so may call any Python function.

You just need to understand that you shouldn't load serialized data received from untrusted sources using Pickle or YAML. JSON is OK, but again, if you have custom encoders/decoders then you may be exposed, too.

The yaml module provides the yaml.safe_load() function that will load only simple objects, but then you lose a lot of YAML's power and maybe opt to just use JSON.
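
For example, a quick sketch of the difference in behaviour:

```python
import yaml

print(yaml.safe_load('when: 2016-08-25 10:30:00'))    # simple types (including dates) are fine

try:
    yaml.safe_load('!!python/object:__main__.A {}')   # arbitrary Python objects are refused
except yaml.YAMLError as e:
    print('refused:', e)
```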

Other Formats

There are many other serialization formats available. Here are a few of them.

Protobuf

Protobuf, or protocol buffers, is Google's data interchange format. It is implemented in C++ but has Python bindings. It has a sophisticated schema and packs data efficiently. Very powerful, but not very easy to use.

MessagePack

MessagePack is another popular serialization format. It is also binary and efficient, but unlike Protobuf it doesn't require a schema. It has a type system that's similar to JSON's, but a little richer: keys can be any type (not just strings), and non-UTF-8 strings are supported.

CBOR

CBOR stands for Concise Binary Object Representation. Again, it supports the JSON data model. CBOR is not as well-known as Protobuf or MessagePack but is interesting for two reasons: 

  1. It is an official Internet standard: RFC 7049.
  2. It was designed specifically for the Internet of Things (IoT).

How to Choose?

This is the big question. With so many options, how do you choose? Let's consider the various factors that should be taken into account:

  1. Should the serialized format be human-readable and/or human-editable?
  2. Is serialized content going to be received from untrusted sources?
  3. Is serialization/deserialization a performance bottleneck?
  4. Does serialized data need to be exchanged with non-Python environments?

I'll make it very easy for you and cover several common scenarios and which format I recommend for each one:

Auto-Saving Local State of a Python Program

Use pickle (cPickle) here with the HIGHEST_PROTOCOL. It's fast, efficient and can store and load most Python objects without any special code. It can be used as a local persistent cache also.
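
A bare-bones sketch of that pattern (the file name is just an example; on Python 3 the plain pickle module already uses the fast C implementation under the hood):

```python
import pickle

STATE_FILE = 'state.pkl'    # hypothetical location for the saved state

def save_state(state):
    with open(STATE_FILE, 'wb') as f:
        pickle.dump(state, f, pickle.HIGHEST_PROTOCOL)

def load_state(default=None):
    try:
        with open(STATE_FILE, 'rb') as f:
            return pickle.load(f)
    except (OSError, IOError):
        return default if default is not None else {}
```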

Configuration Files

Definitely YAML. Nothing beats its simplicity for anything humans need to read or edit. It's used successfully by Ansible and many other projects. In some situations, you may prefer to use straight Python modules as configuration files. This may be the right choice, but then it's not serialization, and it's really part of the program and not a separate configuration file.

Web APIs

JSON is the clear winner here. These days, Web APIs are consumed most often by JavaScript web applications that speak JSON natively. Some Web APIs may return other formats (e.g. csv for dense tabular result sets), but I would argue that you can package csv data into JSON with minimal overhead (no need to repeat each row as an object with all the column names). 

High-Volume / Low-Latency Large-Scale Communication

Use one of the binary protocols: Protobuf (if you need a schema), MessagePack, or CBOR. Run your own tests to verify the performance and the representative power of each option.

Conclusion

Serialization and deserialization of Python objects is an important aspect of distributed systems. You can't send Python objects directly over the wire. You often need to interoperate with other systems implemented in other languages, and sometimes you just want to store the state of your program in persistent storage. 

Python comes with several serialization schemes in its standard library, and many more are available as third-party modules. Being aware of all the options and the pros and cons of each one will let you choose the best method for your situation.


Building a WordPress-Powered Front End With the WP REST API and AngularJS: The Posts, Categories, and Users Controllers

How to Create a Fantasy Fairy Photo Manipulation With Adobe Photoshop

Android From Scratch: Hardware Sensors

How to Create a Pokémon Themed Icon Pack in Adobe Illustrator

An Introduction to Remote Usability Testing

Thursday, August 25, 2016

Serialization and Deserialization of Python Objects: Part 1

Serialization and Deserialization of Python Objects: Part 1

Python object serialization and deserialization is an important aspect of any non-trivial program. If you save something to a file, read a configuration file, or respond to an HTTP request in Python, you are doing object serialization and deserialization. 

In one sense, serialization and deserialization are the most boring things in the world. Who cares about all the formats and protocols? You just want to persist or stream some Python objects and get them back later intact. 

This is a very healthy way to look at the world at the conceptual level. But, at the pragmatic level, which serialization scheme, format or protocol you choose may determine how fast your program runs, how secure it is, how much freedom you have to maintain your state, and how well you're going to interoperate with other systems. 

The reason there are so many options is that different circumstances call for different solutions. There is no "one size fits all". In this two-part tutorial I'll go over the pros and cons of the most successful serialization and deserialization schemes, show how to use them, and provide guidelines for choosing between them when faced with a specific use case.

Running Example

In the following sections I'll serialize and deserialize the same Python object graphs using different serializers. To avoid repetition, I'll define these object graphs here.

Simple Object Graph

The simple object graph is a dictionary that contains a list of integers, a string, a float, a boolean, and a None.
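
As a concrete stand-in (the exact names and values are illustrative):

```python
simple = dict(int_list=[1, 2, 3],
              text='string',
              number=3.44,
              boolean=True,
              none=None)
```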

Complex Object Graph

The complex object graph is also a dictionary, but it contains a datetime object and user-defined class instance that has a self.simple attribute, which is set to the simple object graph.
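
Again as a sketch, with an A class that also knows how to compare itself so round trips can be verified later:

```python
from datetime import datetime

class A(object):
    """A user-defined class whose only state is the simple object graph."""
    def __init__(self, simple):
        self.simple = simple

    def __eq__(self, other):
        return self.simple == other.simple

    def __ne__(self, other):
        return not self.__eq__(other)

# Note: this name shadows the built-in complex(), but keeps the running example readable.
complex = dict(a=A(simple), when=datetime(2016, 3, 7, 12, 30, 0, 123456))
```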

Pickle

Pickle is a staple. It is a native Python object serialization format. The pickle interface provides four methods: dump, dumps, load, and loads. The dump() method serializes to an open file (file-like object). The dumps() method serializes to a string. The load() method deserializes from an open file-like object. The loads() method deserializes from a string.

By default, pickle uses a textual protocol, which is human-readable and helpful when debugging. It also has a binary protocol, which is more efficient but not human-readable.

Here is how you pickle a Python object graph to a string and to a file using both protocols.
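
Something along these lines (on Python 3 the textual protocol has to be requested explicitly as protocol 0):

```python
import pickle

pickled_text = pickle.dumps(simple, 0)                             # textual protocol, in memory
pickled_binary = pickle.dumps(simple, pickle.HIGHEST_PROTOCOL)     # binary protocol, in memory

with open('simple.pkl', 'wb') as f:
    pickle.dump(simple, f, 0)                                      # textual protocol, to a file
with open('simple-binary.pkl', 'wb') as f:
    pickle.dump(simple, f, pickle.HIGHEST_PROTOCOL)                # binary protocol, to a file
```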

The binary representation may seem larger, but this is an illusion due to its presentation. When dumping to a file, the textual protocol is 130 bytes, while the binary protocol is only 85 bytes.

Unpickling from a string is as simple as:
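
For example, continuing from the snippet above:

```python
assert pickle.loads(pickled_text) == simple
assert pickle.loads(pickled_binary) == simple    # the protocol is detected automatically
```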

Note that pickle can figure out the protocol automatically. There is no need to specify a protocol even for the binary one.

Unpickling from a file is just as easy. You just need to provide an open file.
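
Continuing the same session:

```python
with open('simple.pkl', 'rb') as f:              # 'rb' is the documented mode for pickle files
    from_text_file = pickle.load(f)

with open('simple-binary.pkl', 'rb') as f:
    from_binary_file = pickle.load(f)

assert from_text_file == from_binary_file == simple
```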

According to the documentation, you're supposed to open binary pickles using the 'rb' mode, but as you can see it works either way.

Let's see how pickle deals with the complex object graph.
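
A quick comparison, reusing the complex graph defined earlier:

```python
complex_text = pickle.dumps(complex, 0)
complex_binary = pickle.dumps(complex, pickle.HIGHEST_PROTOCOL)
print(len(complex_text), len(complex_binary))    # the binary form is markedly smaller

assert pickle.loads(complex_binary) == complex
```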

The efficiency of the binary protocol is even greater with complex object graphs.

JSON

JSON (JavaScript Object Notation) has been part of the Python standard library since Python 2.6. I'll consider it a native format at this point. It is a text-based format and is the unofficial king of the web as far as object serialization goes. Its type system naturally models JavaScript, so it is pretty limited. 

Let's serialize and deserialize the simple and complex object graphs and see what happens. The interface is almost identical to the pickle interface. You have dump(), dumps(), load(), and loads() functions. But there are no protocols to select, and there are many optional arguments to control the process. Let's start simple by dumping the simple object graph without any special arguments:
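
For example:

```python
import json

# simple is the dictionary defined at the start of the running example
print(json.dumps(simple))
```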

The output looks pretty readable, but there is no indentation. For a larger object graph, this can be a problem. Let's indent the output:
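
Continuing from the previous snippet:

```python
print(json.dumps(simple, indent=4))
```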

That looks much better. Let's move on to the complex object graph.

Whoa! That doesn't look good at all. What happened? The error message is that the A object is not JSON serializable. Remember that JSON has a very limited type system, and it can't serialize user-defined classes automatically. The way to address that is to subclass the JSONEncoder class used by the json module and implement the default() method, which is called whenever the JSON encoder runs into an object it can't serialize. 

The job of the custom encoder is to convert it to a Python object graph that the JSON encoder is able to encode. In this case we have two objects that require special encoding: the datetime object and the A class. The following encoder does the job. Each special object is converted to a dict where the key is the name of the type surrounded by dunders (double underscores). This will be important for decoding. 
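
Here is a sketch of such an encoder, covering the datetime object and the A class from the running example (the dunder keys and the date format are illustrative choices):

```python
import json
from datetime import datetime

class CustomEncoder(json.JSONEncoder):
    def default(self, o):
        # Convert objects the stock encoder can't handle into dunder-keyed dicts.
        if isinstance(o, datetime):
            return {'__datetime__': o.strftime('%Y-%m-%dT%H:%M:%S.%f')}
        if isinstance(o, A):                     # A is the user-defined class from the running example
            return {'__A__': o.simple}
        return json.JSONEncoder.default(self, o)
```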

Let's try again with our custom encoder:
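
For example:

```python
serialized = json.dumps(complex, indent=4, cls=CustomEncoder)
print(serialized)
```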

This is beautiful. The complex object graph was serialized properly, and the original type information of the components was retained via the keys: "__A__" and "__datetime__". If you use dunders for your names, then you need to come up with a different convention to denote special types.

Let's decode the complex object graph.
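
For example:

```python
deserialized = json.loads(serialized)
print(deserialized == complex)    # False: A and the datetime came back as plain dicts
```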

Hmmm, the deserialization worked (no errors), but it is different than the original complex object graph we serialized. Something is wrong. Let's take a look at the deserialized object graph. I'll use the pprint function of the pprint module for pretty printing.

Ok. The problem is that the json module doesn't know anything about the A class or even the standard datetime object. It just deserializes everything by default to the Python object that matches its type system. In order to get back to a rich Python object graph, you need custom decoding. 

There is no need for a custom decoder subclass. The load() and loads() functions provide the "object_hook" parameter that lets you provide a custom function that converts dicts to objects. 

Let's decode using the decode_object() function as a parameter to the loads() object_hook parameter.
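
Here is a sketch of such a hook, paired with the encoder above:

```python
from datetime import datetime

def decode_object(o):
    # Rebuild rich objects from the dunder-keyed dicts produced by CustomEncoder.
    if '__A__' in o:
        return A(o['__A__'])
    if '__datetime__' in o:
        return datetime.strptime(o['__datetime__'], '%Y-%m-%dT%H:%M:%S.%f')
    return o

deserialized = json.loads(serialized, object_hook=decode_object)
print(deserialized == complex)    # True: the original object graph is back
```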

Conclusion

In part one of this tutorial, you've learned about the general concept of serialization and deserialization of Python objects and explored the ins and outs of serializing Python objects using Pickle and JSON. 

In part two, you'll learn about YAML, performance and security concerns, and a quick review of additional serialization schemes.


Optimize Your Mobile Application for Google

How to Create a Cute Baby Dragon Photo Manipulation in Adobe Photoshop

How to Use Social Media for Small Business (Beginner's Guide)

How to Create Easy Kawaii Animals in Adobe Illustrator

Everyday Astonishing: On Street Photography's Relationship to Chance

How to Build a Semi-Circle Donut Chart With CSS

Wednesday, August 24, 2016

Controlling a Mac From Afar With IFTTT and Dropbox

How to Make Video With Capto, a Lightweight Screencasting Tool for Mac

How to Draw a Flower

Creating a Low Poly Aeroplane Set for Games: Part 1

Non-ActiveRecord Models in Rails 4

How to Manage Team Projects Better With Quip

How to Increase Your Online Sales With Psychological Triggers

How to Create a Retro 90s Grunge Photo Effect in Adobe Photoshop

Tuesday, August 23, 2016

How to Design and Build a Material Design App

15 Best Business Proposal Templates: For New Client Projects

Extending the ProcessWire Admin Using Custom Modules

How to Improve Work-Life Balance in Your Small Business

History of Art: Mesopotamia

Generating PDFs From HTML With Rails

How to Know If You Need to Buy a Better DSLR Camera

How to Create an Easy Gold Glitter Text Effect in Adobe Photoshop

Monday, August 22, 2016

Usability Testing Tools for Quick and Early Feedback

How to Design a Krishna Janmashtami Postcard in Adobe Illustrator

How to Make Your First Customer Journey Map (Quick Guide)

Bridging React With Other Popular Web Languages

Bridging React With Other Popular Web Languages

React is a view library written in JavaScript, and so it is agnostic of any stack configuration and can make an appearance in practically any web application that is using HTML and JavaScript for its presentation layer.

As React works as the ‘V’ in ‘MVC’, we can create our own application stack from our preferences. So far in this guide we have seen React working with Express, a Node ES6/JavaScript-based framework. Other popular Node-based matches for React are the Meteor framework and Facebook’s Relay.

If you want to take advantage of React’s excellent component-based JSX system, the virtual DOM and its super-fast rendering times with your existing project, you can do so by implementing one of the many open-source solutions.

PHP

As PHP is a server-side scripting language, integration with React can come in several forms:

Server-Side Rendering

For rendering React components on the server, there is a library available on GitHub.

For example, we can do the following in PHP with this package:

The power of combining React with any server-side scripting language is the ability to feed React with data, and apply your business logic on the server as well as the client side. Renovating an old application into a Reactive app has never been easier!

Using PHP + Alto Router

For an example application, take a look at this repository on GitHub.

Configure your AltoRouter as so:

With the AltoRouter setup serving your application’s pages for the routes specified, you can then just include your React code inside the HTML markup and begin using your components.

JavaScript:

Ensure you include the React libraries and also place the HTML inside the body tag that will be served from your PHP AltoRouter app, for example:

Laravel Users

For the highly popular PHP framework Laravel, there is the react-laravel library, which enables React.js from right inside your Blade views.

For example:

The prerender flag tells Laravel to render the component on the server side and then mount it to the client side.

Example Laravel 5.2 + React App

Take a look at this excellent starter repository by Spharian for an example of getting Laravel + React working.

To render your React code inside your Laravel app, set your React files' source inside the index.blade.php body tag by adding the following, for example:

.NET

Using the ReactJS.NET framework, you can easily introduce React into your .NET application.

Install the ReactJS.NET package in your Visual Studio IDE via the NuGet package manager for .NET.

Search the available packages for 'ReactJS.NET (MVC 4 and 5)' and install it. You will now be able to use .jsx code in your ASP.NET app.

Add a new controller to your project to get started with React + .NET, and select “Empty MVC Controller” for your template. Once it is created, right click on return View() and add a new view with the following details:

  • View name: Index
  • View Engine: Razor (CSHTML)
  • Create a strongly-typed view: Unticked
  • Create as a partial view: Unticked
  • Use a layout or master page: Unticked

Now you can replace the default code with the following:

Now we need to create the Example.jsx referenced above, so create the file in your project and add your JSX as follows:

Now if you click Play in your Visual Studio IDE, you should see the Hello World comment box example.

Here’s a detailed tutorial on writing a component for asp.net.

Rails

By using react-rails, you can easily add React to any Rails (3.2+) application. To get started, just add the gem:

and install:

Now you can run the installation script:

This will result in two things:

  • A components.js manifest file in app/assets/javascripts/components/; this is where you will put all your components code.
  • Adding the following to your application.js:

Now .jsx code will be rendered, and you can add a block of React to your template, for example:

Ruby JSX

Babel is at the heart of the Ruby implementation of the react-rails gem, and can be configured as so:

Once react-rails is installed into your project, restart your server and any .js.jsx files will be transformed in your asset pipeline.

For more information on react-rails, go to the official documentation.

Python

To install python-react, use pip like so:

You can now render React code with a Python app by providing the path to your .jsx components and serving the app with a render server. Usually this is a separate Node.js process.

To run a render server, follow this easy short guide.

Now you can start your server as so:

Start your python application:

And load up http://127.0.0.1:5000 in a browser to see your React code rendering.

Django

Add react to your INSTALLED_APPS and provide some configuration as so:

Meteor

To add React to your meteor project, do so via:

Then in client/main.jsx add the following for example:

This is instantiating an App React component, which you will define in imports/ui/App.jsx, for example:

Inside the Headline.jsx, you use the following code:

Meteor is ready for React and has official documentation.

No More

An important point to note: when using Meteor with React, the default templating system (Blaze) is no longer used, since React now provides the view layer.

So instead of using templates and Template.templateName helpers and events in your JS, you will define everything in your view components, which are all subclasses of React.Component.

Conclusion

React can be used in practically any language which utilises an HTML presentation layer. The benefits of React can be fully exploited by a plethora of potential software products.

React makes the UI view layer component-based. Working logically with any stack means that we have a universal language for interfaces that designers across all facets of web development can utilise.

React unifies our projects' interfaces, branding and general consistency across all deployments, no matter the device or platform constraints. And whether in freelance, client-based work or internally inside large organisations, React ensures reusable code for your projects.

You can create your own bespoke libraries of components and get working immediately inside new projects or renovate old ones, creating fully reactive isomorphic application interfaces quickly and easily.

React is a significant milestone in web development, and it has the potential to become an essential tool in any developer’s collection. Don’t get left behind.


Envato Turns 10 Today!

Designing, Wireframing & Prototyping an Android App: Part 2

How to Set Up Downloadable Products in OpenCart

Building a WordPress-Powered Front End With the WP REST API and AngularJS: Building a Custom Directive for Post Listing

Building a WordPress-Powered Front End With the WP REST API and AngularJS: Building a Custom Directive for Post Listing

In the previous part of the series, we bootstrapped our AngularJS application, configured routing for different views, and built services around routes for posts, users, and categories. Using these services, we are now finally able to fetch data from the server to power the front end.

In this part of the series, we will be working towards building a custom AngularJS directive for the post listing feature. In the current part of the series, we will:

  • introduce ourselves to AngularJS directives and why we should create one
  • plan the directive for the post listing feature and the arguments it will take
  • create a custom AngularJS directive for post listing along with its template

So let’s start by introducing ourselves to AngularJS directives and why we need them.

Introducing AngularJS Directives

Directives in AngularJS are a way to modify the behavior of HTML elements and to reuse a repeatable chunk of code. They can be used to modify the structure of an HTML element and its children, and thus they are a perfect way to introduce custom UI widgets.

While analyzing wireframes in the first part of the series, we noted that the post listing feature is being used in three views, namely:

  1. Post listing
  2. Author profile
  3. Category posts listing

So instead of writing separate functionality to list posts on all of these three pages, we can create a custom AngularJS directive that contains business logic to retrieve posts using the services we created in the earlier part of this series. Apart from business logic, this directive will also contain the rendering logic to list posts on certain views. It’s also in this directive that the functionality for post pagination and retrieving posts on certain criteria will be defined.

Hence, creating a custom AngularJS directive for the post listing feature allows us to define the functionality only in one place, and this will make it easier for us in the future to extend or modify this functionality without having to change the code in all three instances where it’s being used.

Having said that, let’s begin coding our custom directive for the post listing feature.

Planning the Custom AngularJS Directive for Post Listing

Before we begin writing any code for building the directive for the post listing feature, let’s analyze the functionality that’s needed in the directive.

At the very basic level, we need a directive that we could use on our views for post listing, author profile, and the category page. This means that we will be creating a custom UI widget (or a DOM marker) that we place in our HTML, and AngularJS will take care of the rest depending upon what options we provide for that particular instance of the directive.

Hence, we will be creating a custom UI widget identified by the <post-listing></post-listing> tag.

But we also need this directive to be flexible, i.e. to take arguments as input and act accordingly. Consider the user profile page where we only want posts belonging to that specific user to show up or the category page where posts belonging to that category will be listed. These arguments can be provided in the following two ways:

  1. In the URL as parameters
  2. Directly to the directive as an attribute value

Providing arguments in the URL seems native to the API as we are already familiar with doing so. Hence a user could retrieve a set of posts belonging to a specific user in the following way:

The above functionality can be achieved by using the $routeParams service provided by AngularJS. This is where we could access parameters provided by the user in the URL. We have already looked into it while registering routes in the previous part of the series.

As for providing arguments directly to the directive as an attribute value, we could use something like the following:

The post-args attribute in the above snippet takes arguments for retrieving a specific set of posts, and currently it’s taking the author ID. This attribute can take any number of arguments for retrieving posts as supported by the /wp/v2/posts route. So if we were to retrieve a set of posts authored by a user having an ID of 1 and belonging to a category of ID 10, we could do something like the following:

The filter[cat] parameter in the above code is used to retrieve a set of posts belonging to a certain category.

Pagination is also an essential feature when working with post listing pages. The directive will handle post pagination, and this feature will be driven by the values of the X-WP-Total and X-WP-TotalPages headers as returned by the server along with the response body. Hence, the user will be able to navigate back and forth between the previous and next sets of posts.

Having decided the nitty-gritty of the custom directive for post listing, we now have a fairly solid foundation to begin writing the code.

Building a Custom Directive for Post Listing

Building a directive for the post listing feature includes two steps:

  1. Create the business logic for retrieving posts and handling other stuff.
  2. Create a rendering view for these posts to show up on the page.

The business logic for our custom directive will be handled in the directive declaration. And for rendering data on the DOM, we will create a custom template for listing posts. Let’s start with the directive declaration.

Directive Declaration

Directives in AngularJS can be declared for a module with the following syntax:

Here we are declaring a directive on our module using the .directive() method that’s available in the module. The method takes the name of the directive as the first argument, and this name is closely linked with the name of the element’s tag. Since we want our HTML element to be <post-listing></post-listing>, we provide a camel-case representation of the tag name. You can learn more about this normalization process performed by AngularJS to match directive names in the official documentation.

The notation we are using in the above code for declaring our directive is called safe-style of dependency injection. And in this notation, we provide an array of dependencies as the second argument that will be needed by the directive. Currently, we haven’t defined any dependencies for our custom directive. But since we need the Posts service for retrieving posts (that we created in the previous part of the series) and the native AngularJS’s $routeParams and $location services for accessing URL parameters and the current path, we define them as follows:

These dependencies are then made available to the function which is defined as the last element of the array. This function returns an object containing the directive definition. Currently, we have three properties in the directive definition object, i.e. restrict, scope, and link.

The restrict option defines the way we use directive in our code, and there can be four possible values to this option:

  1. A: For using the directive as an attribute on an existing HTML element.
  2. E: For using the directive as an element name.
  3. C: For using the directive as a class name.
  4. M: For using the directive as an HTML comment.

The restrict option can also accept any combination of the above four values.

Since we want our directive to be a new element <post-listing></post-listing>, we set the restrict option to E. If we were to define the directive using the attributes on a pre-existing HTML element, then we could have set this option to A. In that case, we could use <div post-listing></div> to define the directive in our HTML code.

The second property, scope, is used to modify the scope of the directive. By default, the value of the scope property is false, meaning that the scope of the directive is the same as its parent's. When we pass it an object, an isolated scope is created for the directive, and any data that needs to be passed to the directive by its parent is passed through HTML attributes. This is what we are doing in our code, and the attribute we are using is post-args, which gets normalized into postArgs.

The postArgs property in the scope object can accept any of the following three values:

  1. =: Meaning that the value passed into the attribute would be treated as an object.
  2. @: Meaning that the value passed into the attribute would be treated as a plain string.
  3. &: Meaning that the value passed into the attribute would be treated as a function.

Since we have chosen to use the = value, any value that gets passed into the post-args attribute would be treated as a JSON object, and we could use that object as an argument for retrieving posts.

The third property, link, is used to define a function that is used to manipulate the DOM and define APIs and functions that are necessary for the directive. This function is where all the logic of the directive is handled.

The link function accepts arguments for the scope object, the directive’s HTML element, and an object for attributes defined on the directive’s HTML element. Currently, we are passing two arguments $scope and $elem for the scope object and the HTML element respectively.

Let’s define some variable on the $scope property that we will be using to render the post listing feature on the DOM.

Hence we have defined six properties on the $scope object that we could access in the DOM. These properties are:

  1. $posts: An array for holding post objects that will be returned by the server.
  2. $postHeaders: An object for holding the headers that will be returned by the server along with the response body. We will use these for handling navigation.
  3. $currentPage: An integer variable holding the current page number.
  4. $previousPage: A variable holding the previous page number.
  5. $nextPage: A variable holding the next page number.
  6. $routeContext: For accessing the current path using the $location service.

The postArgs property that we defined earlier for HTML attributes will already be available on the $scope object inside the directive.

Now we are ready to make a request to the server using the Posts service for retrieving posts. But before that, we must take into account the arguments provided by the user as URL parameters as well as the parameters provided in the post-args attribute. And for that purpose, we will create a function that uses the $routeParams service to extract URL parameters and merge them with the arguments provided through the post-args attribute:

The prepareQueryArgs() method in the above code uses the angular.merge() method, which extends the $scope.postArgs object with the $routeParams object. But before merging these two objects, it first deletes the id property from the $routeParams object using the delete operator. This is necessary since we will be using this directive on category and user views, and we don’t want the category and user IDs to get falsely interpreted as the post ID.

Having prepared query arguments, we are finally ready to make a call to the server and retrieve posts, and we do so with the Posts.query() method, which takes two arguments:

  1. An object containing arguments for making the query.
  2. A callback function that executes after the query has been completed.

So we will use the prepareQueryArgs() function for preparing an object for query arguments, and in the callback function, we set the values of certain variables on the $scope property:

The callback function gets passed two arguments for the response body and the response headers. These are represented by the data and headers arguments respectively.

The headers argument is a function that returns an object containing response headers by the server.

The remaining code is pretty self-explanatory as we are setting the value of the $scope.posts array. For setting the values of the $scope.previousPage and $scope.nextPage variables, we are using the x-wp-totalpages property in the postHeaders object.

And now we are ready to render this data on the front end using a custom template for our directive.

Creating a Custom Template for the Directive

The last thing we need to do in order to make our directive work is to make a separate template for post listing and link it to the directive. For that purpose, we need to modify the directive declaration and include a templateUrl property like the following:

This templateUrl property in the above code refers to a file named directive-post-listing.html in the views directory. So create this file in the views folder and paste in the following HTML code:

This is very basic HTML code representing a single post entry and post pagination. I’ve copied it from the views/listing.html file. We will use some AngularJS directives, including ng-repeat, ng-href, ng-src, and ng-bind-html, to display the data that currently resides in the $scope property of the directive.

Modify the HTML code to the following:

The above code uses the ng-repeat directive to iterate through the $scope.posts array. Any property that is defined on the $scope object in the directive declaration is available directly in the template. Hence, we refer to the $scope.posts array directly as posts in the template.

By using the ng-repeat directive, we ensure that the article.post-entry container will be repeated for each post in the posts array and each post is referred to as post in the inner loop. This post object contains data in the JSON format as returned by the server, containing properties like the post title, post ID, post content, and the featured image link, which is an additional field added by the companion plugin.

In the next step, we replace values like the post title, the post link, and the featured image link with properties in the post object.

For the pagination, replace the previous code with the following:

We first access the routeContext property, which we defined in our directive declaration, and suffix it with the ?page= parameter and use the values of the nextPage and previousPage variables to navigate back and forth between posts. We also check to see if the next page or the previous page link is not null, else we add a .disabled class to the button that is provided by Zurb Foundation.

Now that we've finished the directive, it's time to test it. And we do it by placing a <post-listing></post-listing> tag in our HTML, ideally right above the <footer></footer> tag. Doing so means that a post listing will appear above the page footer. Don’t worry about the formatting and styles as we will deal with them in the next part of the series.

So that’s pretty much it for creating a custom AngularJS directive for the post listing feature.

What’s Up Next?

In the current part of the series about creating a front end with the WP REST API and AngularJS, we built a custom AngularJS directive for the post listing feature. This directive uses the Posts service that we created in the earlier part of the series. The directive also takes user input in the form of an HTML attribute and through URL parameters.

In the concluding part of the series, we will begin working on the final piece of our project, i.e. controllers for posts, users, and categories, and their respective templates.


50+ Time-Saving Print Templates for Adobe InDesign & Photoshop

Friday, August 19, 2016

New Course: Essential JS Libraries for Web Typography

New Course: Digitally Paint Fantastic Giants Walking the Earth

International Artist Feature: Malaysia

Let's Go: Object-Oriented Programming in Golang

Let's Go: Object-Oriented Programming in Golang

Go is a strange mix of old and new ideas. It has a very refreshing approach where it isn't afraid to throw away established notions of "how to do things". Many people are not even sure if Go is an object-oriented language. Let me put that to rest right now. It is! 

In this tutorial you'll learn about all the intricacies of object-oriented design in Go, how the pillars of object-oriented programming like encapsulation, inheritance, and polymorphism are expressed in Go, and how Go compares to other languages.

The Go Design Philosophy

Go's roots are based on C and more broadly on the Algol family. Ken Thompson half-jokingly said that Rob Pike, Robert Griesemer, and he got together and decided they hated C++. Whether it's a joke or not, Go is very different from C++. More on that later. Go is about ultimate simplicity. This is explained in detail by Rob Pike in Less is exponentially more.

Go vs. Other Languages

Go has no classes, no objects, no exceptions, and no templates. It has garbage collection and built-in concurrency. The most striking omission, as far as object orientation is concerned, is that there is no type hierarchy in Go. This is in contrast to most object-oriented languages like C++, Java, C#, Scala, and even dynamic languages like Python and Ruby.

Go Object-Oriented Language Features

Go has no classes, but it has types. In particular, it has structs. Structs are user-defined types. Struct types (with methods) serve similar purposes to classes in other languages.

Structs

A struct defines state. Here is a Creature struct. It has a Name field and a boolean flag called Real, which tells us if it's a real creature or an imaginary creature. Structs hold only state and no behavior.

Methods

Methods are functions that operate on particular types. They have a receiver clause that mandates what type they operate on. Here is a Dump() method that operates on Creature structs and prints their state:

This is an unusual syntax, but it is very explicit and clear (unlike the implicit "this" or Python's confusing "self").

Embedding

You can embed anonymous types inside each other. If you embed a nameless struct then the embedded struct provides its state (and methods) to the embedding struct directly. For example, the FlyingCreature has a nameless Creature struct embedded in it, which means a FlyingCreature is a Creature.

Now, if you have an instance of a FlyingCreature, you can access its Name and Real attributes directly.

Interfaces

Interfaces are the hallmark of Go's object-oriented support. Interfaces are types that declare sets of methods. Similarly to interfaces in other languages, they have no implementation. 

Objects that implement all the interface methods automatically implement the interface. There is no inheritance or subclassing or "implements" keyword. In the following code snippet, type Foo implements the Fooer interface (by convention, Go interface names end with "er").

Object-Oriented Design: The Go Way

Let's see how Go measures up against the pillars of object-oriented programming: encapsulation, inheritance, and polymorphism. Those are features of class-based programming languages, which are the most popular object-oriented programming languages.

At the core, objects are language constructs that have state and behavior that operates on the state and selectively exposes it to other parts of the program. 

Encapsulation

Go encapsulates things at the package level. Names that start with a lowercase letter are only visible within that package. You can hide anything in a private package and just expose specific types, interfaces, and factory functions. 

For example, to hide the Foo type above and expose just the interface, you could rename it to the lowercase foo and provide a NewFoo() function that returns the public Fooer interface:

Then code from another package can use NewFoo() and get access to a Fooer interface implemented by the internal foo type:

Inheritance

Inheritance or subclassing was always a controversial issue. There are many problems with implementation inheritance (as opposed to interface inheritance). Multiple inheritance as implemented by C++ and Python and other languages suffers from the deadly diamond of death problem, but even single inheritance is no picnic with the fragile base-class problem. 

Modern languages and object-oriented thinking now favor composition over inheritance. Go takes it to heart and doesn't have any type hierarchy whatsoever. It allows you to share implementation details via composition. But Go, in a very strange twist (that probably originated from pragmatic concerns), allows anonymous composition via embedding. 

For all intents and purposes, composition by embedding an anonymous type is equivalent to implementation inheritance. An embedded struct is just as fragile as a base class. You can also embed an interface, which is equivalent to inheriting from an interface in languages like Java or C++. It can even lead to a runtime error that is not discovered at compile time if the embedding type doesn't implement all the interface methods. 

Here SuperFoo embeds the Fooer interface, but doesn't implement its methods. The Go compiler will happily let you create a new SuperFoo and call the Fooer methods, but will obviously fail at runtime. This compiles:

Running this program results in a panic:

Polymorphism

Polymorphism is the essence of object-oriented programming: the ability to treat objects of different types uniformly as long as they adhere to the same interface. Go interfaces provide this capability in a very direct and intuitive way. 

Here is an elaborate example where multiple creatures (and a door!) that implement the Dumper interface are created and stored in a slice and then the Dump() method is called for each one. You'll notice different styles of instantiating the objects too.

Conclusion

Go is a bona fide object-oriented programming language. It enables object-based modeling and promotes the best practice of using interfaces instead of concrete type hierarchies. Go made some unusual syntactic choices, but overall working with types, methods, and interfaces feels simple, lightweight, and natural. 

Embedding is not very pure, but apparently pragmatism was at work, and embedding was provided instead of only composition by name.