Sunday, October 23, 2016
The Portable Guitarist—Transporting and Mounting an iOS Rig
So far in this series, I’ve shown you the advantages of using an iOS-based live rig, as well as the tech most suited to the job.
In this tutorial, I’ll show you how to safely transport the gear, and how to ensure you have it to hand in a live environment when you need it.
Safety First
I'll be honest: the iPad’s amazing, but it’s still more fragile—and sometimes more expensive—than a typical piece of a guitarist’s gigging equipment. A guitar or amp might survive being knocked over, but an iPad’s less likely to.
Furthermore, it’s easier to steal, and often more in demand, than a guitar. So choose a gig wisely. There’s a difference between performing on a raised stage and playing in a rough pub where the audience keeps clattering into you.
Here are some things to think about.
Surviving the Journey
You’ve got to get the iPad, interface, and cables to and from the gig. Chucking it all into your guitar’s gig bag’s an option, and attractive if you need to travel light, but think about the level of protection offered.
As the screen’s the most vulnerable part, consider the Smart Cover from Apple; it clips magnetically to the device, and prices typically start from around £40.
I’m a great believer in rarely relying on a single solution. My iPad’s fitted with a Belkin Snap Case. It’s a thick covering for the back, costing less than £10. It then goes into a Belkin Pleated Sleeve, a padded, double-zipped pouch for under £15. Both items have easily survived several years of daily usage.
In terms of transporting to and from gigs, I use a shoulder bag; this carries the iPad, interface, plus a wealth of cabling. I’ve opted for this as it’s hands free (like I haven’t got enough to carry at a gig), and occupies little space in the car (an important consideration).
It’s also easier to keep it with me, alleviating concerns regarding theft. Alternatively, an aluminium flight case can be bought from a retailer, such as Maplin, for as little as £25. These usually contain foam, which can be sculpted to fit your devices.
Mounting Options for a Device
Unless you’ve a friendly/cheap guitar tech to hand, chances are you’re going to have to operate the iPad during the gig. This means having it within reach. There are several options available to you.
Desktop
If the gig’s small, you could place the iPad in a stand atop a speaker cabinet or table. For this, there’s the iKlip Studio from IK Multimedia, available for under £30. It takes your device either in portrait or landscape, has a place for attaching one of their iRig interfaces, and folds away almost flat—save for the lip on which the device sits.
Whilst they’re very useful—I own one—I’d only recommend it for home use, or small gigs where there’s no possibility of the audience getting anywhere near it. The potential for damage is too great. Fine for an intimate, seated singer-songwriter type of gig, but little else.
Stands
There’s the age-old route of the small folding music stand. They’re designed to be extremely portable, and are REALLY cheap—often less than £10. There’s nothing, however, to stop any sideways movement of the device.
Furthermore, their portability means that they’re flimsy—you only need the main wing nut to fail, and the iPad will be sailing backwards very quickly.
If the music stand appeals, but robustness concerns you, the opposite end of the spectrum is the lectern-style stand. These usually comprise folding feet and telescopic tubes more typically associated with speaker cabinet stands.
A large lipped top provides plenty of support and surface area, giving you somewhere to safely put both your device and interface. Many are also perforated, so you shouldn’t worry about the device overheating. Price-wise, you can spend little or lots, but £20 will get you the type of orchestral music stands you see in schools.
Whilst sturdier, their big disadvantage is portability, or lack thereof. The feet and the tubing are collapsible, but the top itself isn’t, so what you gain in surface area costs you in terms of having to cart it around.
Cradles
As I already use a microphone stand, I prefer a cradle that attaches to it. This means that there’s little extra to bring, it’s very portable, occupies no additional floor space, plus associated cabling can run down the mic stand.
There are many from which to choose, and they don’t have to be expensive; indeed, some are under £15. However, you get what you pay for, and I’d rather not trust the well-being of an expensive device by going for the cheapest one available.
My choice is the iPad stand from König & Meyer (also known as K&M). The cradle’s far thicker than many of its competitors, plus the fixings are reassuringly industrial. Choose from one that screws directly onto the thread of a mic stand—although you can’t now attach a mic—or one that mounts on an arm that clamps lower down the stand.
The cradle itself can orient either landscape or portrait, and can also be angled back, so exact positioning is achievable. I prefer waist height. The interface can fix to the arm. I use Velcro. Best of all, such choice and sturdiness only costs £20 to £30.
Conclusion
Using iOS live is a different proposition to home usage, so think about the following:
- Choose your gig appropriately
- How the device will be transported
- How the device will be protected
- Balance portability with robustness
- Consider the footprint a stand will occupy
- A desktop stand’s only appropriate for quiet, intimate gigs
- A folding stand’s cheap and portable, but flimsy
- Lectern stands are robust, but bulky
- A cradle and clamp is a compact, safe, portable solution
In the next tutorial, I’ll cover output connections and cabling, plus best choices of amplification.
Wednesday, October 19, 2016
Building RESTful APIs With Flask: ORM Independent
In the first part of this three-part tutorial series, we saw how to write RESTful APIs all by ourselves using Flask as the web framework. In the second part, we created a RESTful API using Flask-Restless which depends on SQLAlchemy as the ORM. In this part, we will use another Flask extension, Flask-Restful, which abstracts your ORM and does not make any assumptions about it.
I will take the same sample application as in the last part of this series to maintain context and continuity. Although this example application is based on SQLAlchemy itself, this extension can be used along with any ORM in a similar fashion, as shown in this tutorial.
Installing Dependencies
While continuing with the application from the first part, we need to install only one dependency:
$ pip install Flask-Restful
The Application
Before we start, you might want to remove the code that we wrote for the second part of this tutorial series for more clarity.
As always, we will start with changes to our application's configuration, which will look something like the following lines of code:
flask_app/my_app/__init__.py
from flask.ext.restful import Api

api = Api(app)
Just adding the above couple of lines to the existing code should suffice.
flask_app/my_app/catalog/views.py
import json
from flask import Blueprint, abort
from flask.ext.restful import Resource
from flask.ext.restful import reqparse
from my_app.catalog.models import Product
from my_app import api, db
catalog = Blueprint('catalog', __name__)
parser = reqparse.RequestParser()
parser.add_argument('name', type=str)
parser.add_argument('price', type=float)
@catalog.route('/')
@catalog.route('/home')
def home():
    return "Welcome to the Catalog Home."
class ProductApi(Resource):
    def get(self, id=None, page=1):
        if not id:
            products = Product.query.paginate(page, 10).items
        else:
            product = Product.query.get(id)
            products = [product] if product else []
        if not products:
            abort(404)
        res = {}
        for product in products:
            res[product.id] = {
                'name': product.name,
                'price': product.price,
            }
        return json.dumps(res)

    def post(self):
        args = parser.parse_args()
        name = args['name']
        price = args['price']
        product = Product(name, price)
        db.session.add(product)
        db.session.commit()
        res = {}
        res[product.id] = {
            'name': product.name,
            'price': product.price,
        }
        return json.dumps(res)

api.add_resource(
    ProductApi,
    '/api/product',
    '/api/product/<int:id>',
    '/api/product/<int:id>/<int:page>'
)
Most of the code above is self-explanatory. I will highlight a few points, though. The code above seems very similar to the one that we wrote in the first part of this series, but here the extension used does a bunch of behind-the-scenes optimizations and provides a lot more features that can be leveraged.
Here the methods declared under any class that subclasses Resource are automatically considered for routing. Also, any parameters that we expect to receive along with incoming HTTP calls need to be parsed using reqparse.
Testing the Application
This application can be tested in exactly the same way as in the second part of this tutorial series; I have kept the routing URLs the same for that very reason.
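For quick reference, here's the shape of a smoke test using the requests library against the endpoints registered above (this sketch assumes the development server is running on localhost:5000):

>>> import requests
>>> # create a product; reqparse picks the fields out of the form data
>>> res = requests.post('http://localhost:5000/api/product',
...                     data={'name': 'iPhone', 'price': 549.00})
>>> res.status_code
200
>>> # fetch the first page of products, then a single product by id
>>> requests.get('http://localhost:5000/api/product').json()
>>> requests.get('http://localhost:5000/api/product/1').json()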
Conclusion
In this last part of this three-part tutorial series on developing RESTful APIs with Flask, we saw how to write ORM-independent RESTful APIs. This wraps up the basics of writing RESTful APIs with Flask in various ways.
There is more that can be learned about each of the methods covered, and you can explore this on your own, using the basics you've learned in this series.
Tuesday, October 18, 2016
Let's Go: Testing Golang Programs
In this tutorial I will teach you all the basics of idiomatic testing in Go using the best practices developed by the language designers and the community. The main weapon will be the standard testing package. The target will be a sample program that solves a simple problem from Project Euler.
Square Sum Difference
The sum square difference problem is pretty simple: "Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum."
This particular problem can be solved rather concisely especially if you know your Gauss. For example, the sum of the first N natural numbers is (1 + N) * N / 2, and the sum of squares of the first N integers is: (1 + N) * (N * 2 + 1) * N / 6. So the whole problem can be solved by the following formula and assigning 100 to N:
(1 + N) * (N * 2 + 1) * N / 6 - ((1 + N) * N / 2) * ((1 + N) * N / 2)
Well, that's very specific, and there isn't much to test. Instead, I created some functions that are a little more general than what's needed for this problem, but that can serve other programs in the future (Project Euler has 559 problems right now).
The code is available on GitHub.
Here are the signatures of the four functions:
// The MakeIntList() function returns a slice of consecutive integers,
// starting from 1 all the way to `number` (including the number)
func MakeIntList(number int) []int

// The SquareList() function takes a slice of integers and returns a
// slice of the squares of these integers
func SquareList(numbers []int) []int

// The SumList() function takes a slice of integers and returns their sum
func SumList(numbers []int) int

// Solve Project Euler #6 - Sum square difference
func Process(number int) int
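The actual implementations live in the GitHub repository linked above; for a self-contained reference, here is a minimal sketch consistent with those signatures (my own version, not necessarily identical to the repository's code):

func MakeIntList(number int) []int {
    list := make([]int, number)
    for i := range list {
        list[i] = i + 1
    }
    return list
}

func SquareList(numbers []int) []int {
    squares := make([]int, len(numbers))
    for i, n := range numbers {
        squares[i] = n * n
    }
    return squares
}

func SumList(numbers []int) int {
    sum := 0
    for _, n := range numbers {
        sum += n
    }
    return sum
}

// Process combines the helpers, mirroring the order of the closed-form
// formula above (negate or take the absolute value for the positive difference).
func Process(number int) int {
    list := MakeIntList(number)
    sum := SumList(list)
    return SumList(SquareList(list)) - sum*sum
}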
Now, with our target program in place (please forgive me, TDD zealots), let's see how to write tests for this program.
The Testing Package
The testing package goes hand in hand with the go test command. Your package tests should go in files with the "_test.go" suffix. You can split your tests across several files that follow this convention. For example: "whatever1_test.go" and "whatever2_test.go". You should put your test functions in these test files.
Every test function is a publicly exported function whose name starts with "Test", accepts a pointer to a testing.T object, and returns nothing. It looks like:
func TestWhatever(t *testing.T) {
    // Your test code goes here
}
The T object provides various methods you can use to indicate failure or record errors.
Remember: only test functions defined in test files will be executed by the go test command.
Writing Tests
Every test follows the same flow: set up the test environment (optional), feed the code under test input, capture the result, and compare it to the expected output. Note that inputs and results don't have to be arguments to a function.
If the code under test is fetching data from a database then the input will be making sure the database contains appropriate test data (which may involve mocking at various levels). But, for our application, the common scenario of passing input arguments to a function and comparing the result to the function output is sufficient.
Let's start with the SumList() function. This function takes a slice of integers and returns their sum. Here is a test function that verifies SumList() behaves as it should.
It tests two test cases, and if an expected output doesn't match the result, it calls the Error() method of the testing.T object.
func TestSumList_NotIdiomatic(t *testing.T) {
    // Test []{} -> 0
    result := SumList([]int{})
    if result != 0 {
        t.Error(
            "For input: ", []int{},
            "expected:", 0,
            "got:", result)
    }

    // Test []{4, 8, 9} -> 21
    result = SumList([]int{4, 8, 9})
    if result != 21 {
        t.Error(
            "For input: ", []int{4, 8, 9},
            "expected:", 21,
            "got:", result)
    }
}
This is all straightforward, but it looks a little verbose. Idiomatic Go testing uses table-driven tests where you define a struct for pairs of inputs and expected outputs and then have a list of these pairs that you feed in a loop to the same logic. Here is how it is done for testing the SumList() function.
type List2IntTestPair struct {
    input  []int
    output int
}

func TestSumList(t *testing.T) {
    var tests = []List2IntTestPair{
        {[]int{}, 0},
        {[]int{1}, 1},
        {[]int{1, 2}, 3},
        {[]int{12, 13, 25, 7}, 57},
    }

    for _, pair := range tests {
        result := SumList(pair.input)
        if result != pair.output {
            t.Error(
                "For input: ", pair.input,
                "expected:", pair.output,
                "got:", result)
        }
    }
}
This is much better. It is easy to add more test cases. It's easy to have the full spectrum of test cases in one place, and if you decide to change the test logic you don't need to change multiple instances.
Here is another example for testing the SquareList() function. In this case, both the input and the output are slices of integers, so the test pair struct is different, but the flow is identical. One interesting thing here is that Go doesn't provide a built-in way to compare slices, so I use reflect.DeepEqual() to compare the output slice to the expected slice.
type List2ListTestPair struct {
    input  []int
    output []int
}

func TestSquareList(t *testing.T) {
    var tests = []List2ListTestPair{
        {[]int{}, []int{}},
        {[]int{1}, []int{1}},
        {[]int{2}, []int{4}},
        {[]int{3, 5, 7}, []int{9, 25, 49}},
    }

    for _, pair := range tests {
        result := SquareList(pair.input)
        if !reflect.DeepEqual(result, pair.output) {
            t.Error(
                "For input: ", pair.input,
                "expected:", pair.output,
                "got:", result)
        }
    }
}
Running Tests
Running tests is as simple as typing go test in your package directory. Go will find all the files with the "_test.go" suffix and all the functions with the "Test" prefix and run them as tests. Here is what it looks like when everything is OK:
(G)/project-euler/6/go > go test
PASS
ok      _/Users/gigi/Documents/dev/github/project-euler/6/go   0.006s
Not very dramatic. Let me break a test on purpose. I'll change the test case for SumList() such that the expected output for summing 1 and 2 will be 7.
func TestSumList(t *testing.T) {
    var tests = []List2IntTestPair{
        {[]int{}, 0},
        {[]int{1}, 1},
        {[]int{1, 2}, 7},
        {[]int{12, 13, 25, 7}, 57},
    }

    for _, pair := range tests {
        result := SumList(pair.input)
        if result != pair.output {
            t.Error(
                "For input: ", pair.input,
                "expected:", pair.output,
                "got:", result)
        }
    }
}
Now, when you type go test, you get:
(G)/project-euler/6/go > go test
--- FAIL: TestSumList (0.00s)
    006_sum_square_difference_test.go:80: For input:  [1 2] expected: 7 got: 3
FAIL
exit status 1
FAIL    _/Users/gigi/Documents/dev/github/project-euler/6/go   0.006s
That states pretty well what happened and should give you all the information you need to fix the problem. In this case, the problem is that the test itself is wrong and the expected value should be 3. That's an important lesson. Don't automatically assume that if a test fails the code under test is broken. Consider the entire system, which includes the code under test, the test itself, and the test environment.
Test Coverage
To ensure your code works, it's not enough to have passing tests. Another important aspect is test coverage. Do your tests cover every statement in the code? Sometimes even that is not enough. For example, if you have a loop in your code that runs until a condition is met, you may test it successfully with a condition that works, but fail to notice that in some cases the condition may always be false, resulting in an infinite loop.
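The go tool can report statement coverage directly; the percentage below is illustrative rather than this package's actual figure:

$ go test -cover
PASS
coverage: 92.3% of statements
ok      _/Users/gigi/Documents/dev/github/project-euler/6/go   0.006s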
Unit Tests
Unit tests are like brushing your teeth and flossing. You shouldn't neglect them. They are the first barrier against problems and give you the confidence to refactor. They are also a boon when reproducing issues: you can write a failing test that demonstrates the issue and passes once you fix it.
Integration Tests
Integration tests are necessary as well. Think of them as visiting the dentist. You may be OK without them for a while, but if you neglect them for too long it won't be pretty.
Most non-trivial programs are made of multiple inter-related modules or components. Problems can often occur when wiring those components together. Integration tests give you confidence that your entire system is operating as intended. There are many other types of tests like acceptance tests, performance tests, stress/load tests and full-fledged whole system tests, but unit tests and integration tests are two of the foundational ways to test software.
Conclusion
Go has built-in support for testing, a well-defined way to write tests, and recommended guidelines in the form of table-driven tests.
The need to write a dedicated struct for every combination of inputs and outputs is a little annoying, but that's the price you pay for Go's simple-by-design approach.
Friday, October 14, 2016
Concurrency on Android with Service
In this tutorial we’ll explore the Service component and its subclass, the IntentService. You'll learn when and how to use this component to create great concurrency solutions for long-running background operations. We’ll also take a quick look at IPC (Inter-Process Communication), to learn how to communicate with services running in different processes.
To follow this tutorial you'll need some understanding of concurrency on Android. If you don’t know much about it, you might want to read some of our other articles about the topic first.
- Android SDK: Android From Scratch: Background Operations (Paul Trebilcox-Ruiz)
- Android: Understanding AsyncTask Values in 60 Seconds (Paul Trebilcox-Ruiz)
- Android SDK: Understanding Concurrency on Android Using HaMeR (Tin Megali)
- Android SDK: Practical Concurrency on Android With HaMeR (Tin Megali)
1. The Service Component
The Service component is a very important part of Android's concurrency framework. It fulfills the need to perform a long-running operation within an application, or it supplies some functionality for other applications. In this tutorial we’ll concentrate exclusively on Service’s long-running task capability, and how to use this power to improve concurrency.
What is a Service?
A Service is a simple component that's instantiated by the system to do some long-running work that doesn't necessarily depend on user interaction. It can be independent of the activity life cycle and can also run in a completely different process.
Before diving into a discussion of what a Service represents, it's important to stress that even though services are commonly used for long-running background operations and to execute tasks on different processes, a Service doesn't represent a Thread or a process. It will only run in a background thread or on a different process if it's explicitly asked to do so.
A Service has two main features:
- A facility for the application to tell the system about something it wants to be doing in the background.
- A facility for an application to expose some of its functionality to other applications.
Services and Threads
There is a lot of confusion about services and threads. When a Service is declared, it doesn't contain a Thread. As a matter of fact, by default it runs directly on the main thread, and any work done on it may potentially freeze an application. (Unless it's an IntentService, a Service subclass that already comes with a worker thread configured.)
So, how do services offer a concurrency solution? Well, a Service doesn't contain a thread by default, but it can be easily configured to work with its own thread or with a pool of threads. We'll see more about that below.
Despite the lack of a built-in thread, a Service is an excellent solution for concurrency problems in certain situations. The main reasons to choose a Service over other concurrency solutions like AsyncTask or the HaMeR framework are:
- A Service can be independent of activity life cycles.
- A Service is appropriate for running long operations.
- Services don't depend on user interaction.
- When running on different processes, Android can try to keep services alive even when the system is short on resources.
- A Service can be restarted to resume its work.
Service Types
There are two types of Service, started and bound.
A started service is launched via Context.startService(). Generally it performs only one operation and it will run indefinitely until the operation ends, then it shuts itself down. Typically, it doesn't return any result to the user interface.
The bound service is launched via Context.bindService(), and it allows two-way communication between client and Service. It can also connect with multiple clients. It destroys itself when there isn't any client connected to it.
To choose between those two types, the Service must implement some callbacks: onStartCommand() to run as a started service, and onBind() to run as a bound service. A Service may choose to implement only one of those types, but it can also adopt both at the same time without any problems.
2. Service Implementation
To use a service, extend the Service class and override its callback methods, according to the type of Service. As mentioned before, for started services the onStartCommand() method must be implemented, and for bound services, the onBind() method. Actually, the onBind() method must be declared for either service type, but it can return null for started services.
public class CustomService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // Execute your operations.
        // With START_NOT_STICKY, the service won't be
        // recreated automatically if the system kills it.
        return Service.START_NOT_STICKY;
    }

    @Nullable
    @Override
    public IBinder onBind(Intent intent) {
        // Creates a connection with a client
        // using an interface implemented on an IBinder
        return null;
    }
}
- onStartCommand(): launched by Context.startService(), usually called from an activity. Once called, the service may run indefinitely, and it's up to you to stop it, either by calling stopSelf() or stopService().
- onBind(): called when a component wants to connect to the service, triggered by the system when a client calls Context.bindService(). It returns an IBinder that provides an interface for communicating with the client.
The service's life cycle is also important to take into consideration. The onCreate() and onDestroy() methods should be implemented to initialize and shut down any resources or operations of the service.
Declaring a Service in the Manifest
The Service component must be declared in the manifest with the <service> element. In this declaration it's also possible, but not obligatory, to set a different process for the Service to run in.
<manifest ... >
    ...
    <application ... >
        <service
            android:name=".ExampleService"
            android:process=":my_process" />
        ...
    </application>
</manifest>
Working With Started Services
To initiate a started service, you must call the Context.startService() method. The Intent must be created with the Context and the Service class. Any relevant information or data should also be passed in this Intent.
Intent serviceIntent = new Intent(this, CustomService.class);
// Pass data to be processed on the Service
Bundle data = new Bundle();
data.putInt("OperationType", 99);
data.putString("DownloadURL", "http://ift.tt/2ebEbhI");
serviceIntent.putExtras(data);
// Starting the Service
startService(serviceIntent);
In your Service class, the method you should be concerned with is onStartCommand(). It's in this method that you should invoke any operation that you want to execute on the started service. You'll process the Intent to capture information sent by the client. The startId represents a unique ID, automatically created for this specific request, and the flags can contain extra information about it.
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    Bundle data = intent.getExtras();
    if (data != null) {
        int operation = data.getInt(KEY_OPERATION);
        // Check what operation to perform and send a msg
        if (operation == OP_DOWNLOAD) {
            // make a download
        }
    }
    return START_STICKY;
}
The onStartCommand() returns a constant int that controls the behavior:
- Service.START_STICKY: the Service is restarted if it gets terminated.
- Service.START_NOT_STICKY: the Service is not restarted.
- Service.START_REDELIVER_INTENT: the Service is restarted after a crash, and the Intents it was processing are redelivered.
As mentioned before, a started service needs to be stopped, otherwise it will run indefinitely. This can be done either by the Service calling stopSelf() on itself or by a client calling stopService() on it.
void someOperation() {
    // do some long-running operation
    // and stop the service when it is done
    stopSelf();
}
Binding to Services
Components can create connections with services, establishing a two-way communication with them. The client must call Context.bindService(), passing an Intent, a ServiceConnection interface and a flag as parameters. A Service can be bound to multiple clients and it will be destroyed once it has no clients connected to it.
void bindWithService() {
    Intent intent = new Intent(this, PlayerService.class);
    // bind with Service
    bindService(intent, mConnection, Context.BIND_AUTO_CREATE);
}
It's possible to send Message objects to services. To do this, you'll need to create a Messenger on the client side in a ServiceConnection.onServiceConnected() implementation, and use it to send Message objects to the Service.
private ServiceConnection mConnection = new ServiceConnection() {

    @Override
    public void onServiceConnected(ComponentName className,
                                   IBinder service) {
        // use the IBinder received to create a Messenger
        mServiceMessenger = new Messenger(service);
        mBound = true;
    }

    @Override
    public void onServiceDisconnected(ComponentName arg0) {
        mBound = false;
        mServiceMessenger = null;
    }
};
It's also possible to pass a response Messenger to the Service so that the client can receive messages. Watch out, though, because the client may no longer be around to receive the Service's message. You could also use a BroadcastReceiver or any other broadcast solution.
private Handler mResponseHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        // handle response from Service
    }
};

Message msgReply = Message.obtain();
msgReply.replyTo = new Messenger(mResponseHandler);
try {
    mServiceMessenger.send(msgReply);
} catch (RemoteException e) {
    e.printStackTrace();
}
It's important to unbind from the Service when the client is being destroyed.
@Override
protected void onDestroy() {
    super.onDestroy();
    // disconnect from service
    if (mBound) {
        unbindService(mConnection);
        mBound = false;
    }
}
On the Service side, you must implement the Service.onBind() method, returning an IBinder obtained from a Messenger. The Messenger in turn wraps a Handler that handles the Message objects received from clients.
static class IncomingHandler extends Handler {

    private final WeakReference<PlayerService> mPlayerService;

    IncomingHandler(PlayerService playerService) {
        mPlayerService = new WeakReference<>(playerService);
    }

    @Override
    public void handleMessage(Message msg) {
        // handle messages
    }
}

final Messenger mMessenger = new Messenger(new IncomingHandler(this));

@Override
public IBinder onBind(Intent intent) {
    // pass a Binder using the Messenger created
    return mMessenger.getBinder();
}
3. Concurrency Using Services
Finally, it's time to talk about how to solve concurrency problems using services. As mentioned before, a standard Service doesn't contain any extra threads and it will run on the main Thread by default. To overcome this, you must add a worker Thread, use a pool of threads, or execute the Service in a different process. You could also use a Service subclass called IntentService that already contains a Thread.
Making a Service Run on a Worker Thread
To make the Service execute on a background Thread, you could just create an extra Thread and run the job there. However, Android offers us a better solution. One way to take best advantage of the system is to implement the HaMeR framework inside the Service, for example by looping a Thread with a message queue that can process messages indefinitely.
It's important to understand that this implementation will process tasks sequentially. If you need to receive and process multiple tasks at the same time, you should use a pool of threads. Using thread pools is beyond the scope of this tutorial, and we won't cover it today.
To use HaMeR you must provide the Service with a Looper, a Handler and a HandlerThread.
private Looper mServiceLooper;
private ServiceHandler mServiceHandler;

// Handler to receive messages from client
private final class ServiceHandler extends Handler {

    ServiceHandler(Looper looper) {
        super(looper);
    }

    @Override
    public void handleMessage(Message msg) {
        super.handleMessage(msg);
        // handle messages
        // stopping Service using startId
        stopSelf(msg.arg1);
    }
}

@Override
public void onCreate() {
    HandlerThread thread = new HandlerThread("ServiceThread",
            Process.THREAD_PRIORITY_BACKGROUND);
    thread.start();
    mServiceLooper = thread.getLooper();
    mServiceHandler = new ServiceHandler(mServiceLooper);
}
If the HaMeR framework is unfamiliar to you, read our tutorials on HaMeR for Android concurrency.
- Android SDK: Understanding Concurrency on Android Using HaMeR (Tin Megali)
- Android SDK: Practical Concurrency on Android With HaMeR (Tin Megali)
The IntentService
If there is no need for the Service to be kept alive for a long time, you could use IntentService, a Service subclass that's ready to run tasks on background threads. Internally, IntentService is a Service with a very similar implementation to the one proposed above.
To use this class, all you have to do is extend it and implement the onHandleIntent() method, a hook that will be called every time a client calls startService() on this Service. It's important to keep in mind that the IntentService will stop as soon as its job is completed.
public class MyIntentService extends IntentService {

    public MyIntentService() {
        super("MyIntentService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // handle Intents sent by startService()
    }
}
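A client kicks the work off like any other started service; the extra key below is just an illustrative example, not part of the IntentService API:

// Each call to startService() queues one Intent;
// onHandleIntent() processes them sequentially on the worker thread.
Intent intent = new Intent(this, MyIntentService.class);
intent.putExtra("DownloadURL", "http://example.com/file.zip");
startService(intent);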
IPC (Inter Process Communication)
A Service can run in a completely different process, independently from all tasks that are happening in the main process. A process has its own memory allocation, thread group, and processing priorities. This approach can be really useful when you need to work independently from the main process.
Communication between different processes is called IPC (Inter Process Communication). In a Service there are two main ways to do IPC: using a Messenger or implementing an AIDL interface.
We've learned how to send and receive messages between services. All you have to do is create a Messenger using the IBinder instance received during the connection process and use it to send a reply Messenger back to the Service.
private Handler mResponseHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        // handle response from Service
    }
};

private ServiceConnection mConnection = new ServiceConnection() {

    @Override
    public void onServiceConnected(ComponentName className,
                                   IBinder service) {
        // use the IBinder received to create a Messenger
        mServiceMessenger = new Messenger(service);

        Message msgReply = Message.obtain();
        msgReply.replyTo = new Messenger(mResponseHandler);
        try {
            mServiceMessenger.send(msgReply);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onServiceDisconnected(ComponentName className) {
        mServiceMessenger = null;
    }
};
The AIDL interface is a very powerful solution that allows direct calls on Service methods running on different processes and it's appropriate to use when your Service is really complex. However, AIDL is complicated to implement and it's rarely used, so its use won't be discussed in this tutorial.
4. Conclusion
Services can be simple or complex; it depends on the needs of your application. I tried to cover as much ground as possible in this tutorial; however, I've focused only on using services for concurrency purposes, and there are more possibilities for this component. If you want to study more, take a look at the documentation and the Android guides.
See you soon!
Thursday, October 13, 2016
We Heart CodePen: the Most Popular Pens From Tuts+
CodePen is an invaluable tool–it helps us explain things and makes our front-end code tutorials all the more engaging. In recognition of that fact, let’s take a look at some pens from Tuts+ tutorials and courses which have really struck a chord with our community!
Building a Vertical Timeline
This tutorial by George Martsoukos takes an unordered list, displaying its items as a (responsive) vertical timeline. George then goes on to check whether the items have entered the viewport on scroll, animating them into place once that’s the case.
With over 21K views and 500 likes, this pen is one of the most popular we’ve posted!
Adding Appeal to Your Animations
Dublin-based Donovan knew exactly what you all wanted when he penned this one. Follow this beginner’s tutorial to learn not only about the practical aspects of coding CSS animation, but also the intangible crafting of “appeal” which goes along with it.
10 CSS3 Projects: UI and Layout
This course is hugely popular. Follow Kezz Bracey as she builds ten different CSS3 projects, all on CodePen, and all focused on UI and layout. Here’s one such project, where she demonstrates how to build a functional, animated tab UI, without a jot of JavaScript:
10 CSS3 Projects: Branding and Presentation
Kezz’s follow-up course took inspiration from “Branding and Presentation”, once more demonstrating how to build 10 CSS3 projects all from within the familiar surroundings of CodePen. This particular demo is a “PowerPoint” like presentation, again without any JavaScript at all.
An Overview of Material Design Lite
This tutorial was eagerly awaited by many of you, keen to transfer Google’s Material teachings to the web browser. Here’s just one of the pens from the tutorial, but one which has seen a good few thousand views. Click away!
Tips for Designing a Multilingual Website
I love this one. But then I would, as flag-bearer for the Tuts+ Translation Project, wouldn’t I? In any case, if you’ve never considered what unicode-bidi: embed; will do for your RTL web pages, maybe it’s time you checked out this popular pen!
Animated Coffee Drinking Sprite
Dennis did a great job of this one! Just try to resist scrolling. If you’re interested in learning about ScrollMagic, this is a really accessible tutorial to get you started. Grab a coffee and dive in.
3 GreenSock Projects
Many of Craig Campbell’s courses use CodePen as a way of setting up projects and seeing them through to completion. In this course he demonstrates a number of ways to use GreenSock’s Animation Platform (GSAP), including this popular mesmeric preloader:
6 Flexbox Projects for Web Designers
Another of Craig’s courses here, and one of our most viewed courses of the past few months. It teaches exactly what you’d expect, so if you’ve yet to dirty your hands with flexbox these projects (like this responsive image grid) will get you going!
Conclusion
What’s left to say? Enjoy the pens mentioned above, check out the tutorials and courses they were taken from, and make sure you follow Envato Tuts+ on CodePen!
Wednesday, October 12, 2016
Building RESTful APIs With Flask: An ORM With SQLAlchemy
In the first part of this three-part tutorial series, we saw how to write RESTful APIs all by ourselves using Flask as the web framework. The previous approach provided a whole lot of flexibility but also included writing a lot of code that otherwise could have been avoided in more generic cases.
In this part, we will use a Flask extension, Flask-Restless, which simply generates RESTful APIs for database models defined with SQLAlchemy. I will take the same sample application as in the last part of this series to maintain context and continuity.
Installing Dependencies
While continuing with the application from the first part, we need to install only one dependency:
$ pip install Flask-Restless
The Application
Flask-Restless makes adding RESTful API interfaces to models written with SQLAlchemy a piece of cake. First, add the REST APIManager from the flask.ext.restless extension to the application configuration file.
flask_app/my_app/__init__.py
from flask.ext.restless import APIManager

manager = APIManager(app, flask_sqlalchemy_db=db)
Just adding the above couple of lines to the existing code should suffice.
flask_app/my_app/catalog/views.py
This file comprises the bulk of the changes from the previous part. Below is the complete rewritten file.
from flask import Blueprint
from my_app import manager
from my_app.catalog.models import Product
catalog = Blueprint('catalog', __name__)
@catalog.route('/')
@catalog.route('/home')
def home():
    return "Welcome to the Catalog Home."
manager.create_api(Product, methods=['GET', 'POST'])
It is pretty self-explanatory how the above code would work. We just imported the manager that was created in a previous file, and it is used to create an API for the Product model with the listed methods. We can add more methods like DELETE, PUT, PATCH, etc. as needed.
Application in Action
Let's test this application by creating some products and listing them. The endpoint created by this extension by default is http://localhost:5000/api/product.
As I did in the last part of this tutorial series, I will test this using the requests library via terminal.
>>> import requests
>>> import json
>>> res = requests.get('http://localhost:5000/api/product')
>>> res.json()
{u'total_pages': 0, u'objects': [], u'num_results': 0, u'page': 1}
>>> d = {'name': u'iPhone', 'price': 549.00}
>>> res = requests.post('http://localhost:5000/api/product', data=json.dumps(d), headers={'Content-Type': 'application/json'})
>>> res.json()
{u'price': 549.0, u'id': 1, u'name': u'iPhone'}
>>> d = {'name': u'iPad', 'price': 649.00}
>>> res = requests.post('http://localhost:5000/api/product', data=json.dumps(d), headers={'Content-Type': 'application/json'})
>>> res.json()
{u'price': 649.0, u'id': 2, u'name': u'iPad'}
>>> res = requests.get('http://localhost:5000/api/product')
>>> res.json()
{u'total_pages': 1, u'objects': [{u'price': 549.0, u'id': 1, u'name': u'iPhone'}, {u'price': 649.0, u'id': 2, u'name': u'iPad'}], u'num_results': 2, u'page': 1}
How to Customize
It is really handy to have the RESTful APIs created automatically, but each application has some business logic which calls for customizations, validations, and clever/secure handling of requests as needed.
Here, request preprocessors and postprocessors come to the rescue. As the names suggest, methods designated as preprocessors run before the request is processed, and methods designated as postprocessors run after. create_api() is where they are defined, as dictionaries mapping the request type (GET, POST, etc.) to a list of methods that will act as preprocessors or postprocessors for that request. Below is a template example:
manager.create_api(
    Product,
    methods=['GET', 'POST', 'DELETE'],
    preprocessors={
        'GET_SINGLE': ['a_preprocessor_for_single_get'],
        'GET_MANY': ['another_preprocessor_for_many_get'],
        'POST': ['a_preprocessor_for_post']
    },
    postprocessors={
        'DELETE': ['a_postprocessor_for_delete']
    }
)
The GET, PUT, and PATCH requests have the flexibility of being fired for single as well as multiple records; therefore, they have two types each. In the code above, notice GET_SINGLE and GET_MANY for GET requests.
The preprocessors and postprocessors accept different parameters for each type of request and work without any return value. This is left for you to try on your own.
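As a starting point, here is a minimal sketch of a POST preprocessor that validates incoming data. The keyword arguments and the ProcessingException usage follow Flask-Restless's documented conventions, but treat the details as assumptions to verify against the version you have installed:

from flask.ext.restless import ProcessingException

def check_price(data=None, **kw):
    # Reject a product whose price is missing or negative
    # before it ever reaches the database.
    if not data or float(data.get('price', -1)) < 0:
        raise ProcessingException(description='Invalid price', code=400)

manager.create_api(Product, methods=['GET', 'POST'],
                   preprocessors={'POST': [check_price]})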
Conclusion
In this part of this tutorial series, we saw how to create a RESTful API using Flask just by adding a couple of lines to a SQLAlchemy-based model.
In the next and last part of this series, I will cover how to create a RESTful API using another popular Flask extension, but this time, the API will be independent of the modeling tool used for the database.
Wednesday, October 5, 2016
Building RESTful APIs With Flask: The DIY Approach
REpresentational State Transfer (REST) is an architectural style for web development in which your API resources are logically separated so that they are easy to access, manipulate, and scale. Reusable components are written so that they can be managed via simple and intuitive HTTP requests: GET, POST, PUT, PATCH, and DELETE (there can be more, but these are the most commonly used ones).
Despite what it looks like, REST does not mandate a protocol or a standard. It just sets a software architectural style for writing web applications and APIs, and results in a simplification of the interfaces within and outside the application. Web service APIs that follow the REST principles are called RESTful APIs.
In this three-part tutorial series, I will cover different ways in which RESTful APIs can be created using Flask as a web framework. In this first part, I will cover how to create class-based REST APIs the DIY (do it yourself) way, i.e. implementing them all by yourself without using any third-party extensions. In the later parts of this series, I will cover how to leverage various Flask extensions to build more effective REST APIs more easily.
I assume that you have a basic understanding of Flask, and of the environment setup best practices using virtualenv that should be followed while developing a Python application.
Installing Dependencies
The following packages need to be installed for the application that we'll be developing.
$ pip install flask
$ pip install flask-sqlalchemy
The above commands should install all the required packages that are needed for this application to work.
The Flask Application
For this tutorial, I will create a small application in which I will create a trivial model for Product. Then I will demonstrate how we can write a RESTful API for the same. Below is the structure of the application.
flask_app/
    my_app/
        __init__.py
        catalog/
            __init__.py    # Empty file
            models.py
            views.py
    run.py
I won't be creating a front-end for this application, as RESTful API endpoints can be tested directly by making HTTP calls using various other methods.
flask_app/my_app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app)

from my_app.catalog.views import catalog
app.register_blueprint(catalog)

db.create_all()
In the file above, the application is configured along with the initialisation of the extension and, finally, the creation of the database. The last statement creates a new database at the location provided in SQLALCHEMY_DATABASE_URI if a database does not already exist there; otherwise it loads the application with the existing database.
flask_app/my_app/catalog/models.py
from my_app import db

class Product(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255))
    price = db.Column(db.Float(asdecimal=True))

    def __init__(self, name, price):
        self.name = name
        self.price = price

    def __repr__(self):
        return '<Product %d>' % self.id
In the file above, I have created a very trivial model for storing the name and price of a Product. This will create a table in SQLite corresponding to the details provided in the model.
flask_app/my_app/catalog/views.py
import json
from flask import request, jsonify, Blueprint, abort
from flask.views import MethodView
from my_app import db, app
from my_app.catalog.models import Product

catalog = Blueprint('catalog', __name__)

@catalog.route('/')
@catalog.route('/home')
def home():
    return "Welcome to the Catalog Home."


class ProductView(MethodView):

    def get(self, id=None, page=1):
        if not id:
            products = Product.query.paginate(page, 10).items
            res = {}
            for product in products:
                res[product.id] = {
                    'name': product.name,
                    'price': str(product.price),
                }
        else:
            product = Product.query.filter_by(id=id).first()
            if not product:
                abort(404)
            res = {
                'name': product.name,
                'price': str(product.price),
            }
        return jsonify(res)

    def post(self):
        name = request.form.get('name')
        price = request.form.get('price')
        product = Product(name, price)
        db.session.add(product)
        db.session.commit()
        return jsonify({product.id: {
            'name': product.name,
            'price': str(product.price),
        }})

    def put(self, id):
        # Update the record for the provided id
        # with the details provided.
        return

    def delete(self, id):
        # Delete the record for the provided id.
        return


product_view = ProductView.as_view('product_view')
app.add_url_rule(
    '/product/', view_func=product_view, methods=['GET', 'POST']
)
app.add_url_rule(
    '/product/<int:id>', view_func=product_view, methods=['GET']
)
The major crux of this tutorial is dealt with in the file above. Flask provides a utility called pluggable views, which allows you to create views as classes instead of as plain functions. Method-based dispatching (MethodView) is an implementation of pluggable views that lets you write methods corresponding to the HTTP methods, in lower case. In the example above, I have written the methods get() and post(), corresponding to HTTP's GET and POST respectively.
Routing is also implemented differently, in the last few lines of the file above. We can specify the methods that are supported by any particular rule. Any other HTTP call is met with a 405 Method Not Allowed error.
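To round out the picture, here is a sketch of how the put() and delete() stubs above might be filled in; it assumes the second URL rule is extended with methods=['GET', 'PUT', 'DELETE']:

    def put(self, id):
        # Update the record for the given id with the submitted form data.
        product = Product.query.get(id)
        if not product:
            abort(404)
        product.name = request.form.get('name', product.name)
        product.price = request.form.get('price', product.price)
        db.session.commit()
        return jsonify({product.id: {
            'name': product.name,
            'price': str(product.price),
        }})

    def delete(self, id):
        # Delete the record for the given id, if it exists.
        product = Product.query.get(id)
        if not product:
            abort(404)
        db.session.delete(product)
        db.session.commit()
        return jsonify({'result': True})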
Running the Application
To run the application, execute the script run.py. The contents of this script are:
from my_app import app

app.run(debug=True)
Now just execute from the command line:
$ python run.py
To check if the application works, fire up http://127.0.0.1:5000/ in your browser, and a simple screen with a welcome message should greet you.
Testing the RESTful API
To test this API, we can simply make HTTP calls using any of the many available methods. GET calls can be made directly via the browser. POST calls can be made using a Chrome extension like Postman or from the command line using curl, or we can use Python's requests library to do the job for us. I'll use the requests library here for demonstration purposes.
Let's make a GET call first to confirm that we don't have any products created yet. As per RESTful API design, a GET call to /product/ should list all products. Then I will create a couple of products by making POST calls to /product/ with some data. Then a GET call to /product/ should list all the products created. To fetch a specific product, a GET call to /product/<product id> should do the job. Below is a sample of all the calls that can be made using this example.
$ pip install requests
$ python
>>> import requests
>>> r = requests.get('http://localhost:5000/product/')
>>> r.json()
{}
>>> r = requests.post('http://localhost:5000/product/', data={'name': 'iPhone 6s', 'price': 699})
>>> r.json()
{u'1': {u'price': u'699.0000000000', u'name': u'iPhone 6s'}}
>>> r = requests.post('http://localhost:5000/product/', data={'name': 'iPad Pro', 'price': 999})
>>> r.json()
{u'2': {u'price': u'999.0000000000', u'name': u'iPad Pro'}}
>>> r = requests.get('http://localhost:5000/product/')
>>> r.json()
{u'1': {u'price': u'699.0000000000', u'name': u'iPhone 6s'}, u'2': {u'price': u'999.0000000000', u'name': u'iPad Pro'}}
>>> r = requests.get('http://localhost:5000/product/1')
>>> r.json()
{u'price': u'699.0000000000', u'name': u'iPhone 6s'}
Conclusion
In this tutorial, you saw how to create RESTful interfaces all by yourself using Flask's pluggable views utility. This is the most flexible approach when writing REST APIs, but it involves writing much more code.
There are extensions which make life a bit easier and automate the implementation of RESTful APIs to a huge extent. I will be covering these in the next couple of parts of this tutorial series.
Tuesday, October 4, 2016
Debugging With Node.js
I feel that debugging is as crucial a part of the development cycle as any other. So it's always good practice to demystify the job of debugging, making it easier and less time-consuming, so that we can end work on time and reduce stress.
Like the majority of languages out there, Node provides some excellent debugging tools which make defects in code easily found and fixed. I always advocate the usage of a debugger because personally I find using debuggers really eliminates the need for any guesswork and makes us better developers in general.
This guide is for developers and administrators that work with Node already. It presumes a fundamental understanding of the language at a practical level.
Using the Debugger
Node.js includes a full-featured out-of-process debugging utility accessible via a simple TCP-based protocol and built-in debugging client.
For example, to use the debugger to debug a file named script.js, you can simply call node with the debug flag, like so:
$ node debug script.js
< debugger listening on port 5858
connecting... ok
debug>
Breakpoints
Now that you have started a debugging session, anywhere in your script that you call debugger from will be a breakpoint for the debugger.
So, for example, let's add a debugger statement to the script.js:
foo = 2;
setTimeout(() => {
  debugger;
  console.log('bugger');
}, 1000);
console.log('de');
Now if we run this script, the debugger will be invoked at our breakpoint, and we can control script execution using the cont or next commands (c or n for short).
We can pause a running script at any time with the pause command.
$ node debug script.js
< debugger listening on port 5858
connecting... ok
break in /home/tom/web/envatodebug/script.js:1
1 foo = 2;
2 setTimeout(() => {
3 debugger;
debug> cont
< de
break in /home/tom/web/envatodebug/script.js:3
1 foo = 2;
2 setTimeout(() => {
3 debugger;
4 console.log('bugger');
5 }, 1000);
debug> next
break in /home/tom/web/envatodebug/script.js:4
2 setTimeout(() => {
3 debugger;
4 console.log('bugger');
5 }, 1000);
6 console.log('de');
debug> next
< bugger
break in /home/tom/web/envatodebug/script.js:5
3 debugger;
4 console.log('bugger');
5 }, 1000);
6 console.log('de');
7
debug> quit
REPL
$ node debug script.js
< debugger listening on port 5858
connecting... ok
debug> repl
Press Ctrl + C to leave debug repl
> foo
2
> 2+2
4
The Read-Eval-Print-Loop of the debugger allows you to enter code interactively during execution, and thus access the state of the application and all of its variables and methods at the point where execution broke. This is a very powerful tool which you can use to quickly sanity-check your app.
In general, the REPL is available as a standalone and as part of the debugger, and it allows you to run JavaScript interactively. For example, just type node at the prompt with no options, and you will be given a REPL interface that you can write code into and see the output.
Stepping In & Stepping Out
Earlier I mentioned the cont and next (c and n) commands, which allow us to continue code execution once a breakpoint has been reached. In addition to this, as we walk through the code we can also step in to a method or step out to its parent scope.
Use the step command to step in and the out command to step out, or s and o for short.
Backtracing
Use backtrace or bt to get an output of the backtrace for the current execution frame.
Restarting
Use restart or r to restart your script from the beginning of execution.
Alternative Ways to Connect to the Debugger
Advanced users can access the debugger also by starting Node.js with the --debug command-line flag, or alternatively by signaling an existing Node.js process with SIGUSR1.
Once a process has been set into debug mode this way, it can then be connected to with the Node.js debugger, either via the pid of the running process or via a URI reference (e.g. localhost:port) to the listening debugger:
- node debug -p <pid> connects to the process via the pid.
- node debug <URI> connects to the process via a URI such as localhost:5858.
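For instance, a hedged end-to-end sketch (the pid is whatever your process reports):

$ node --debug script.js      # start with the debugger listening on port 5858
$ kill -USR1 <pid>            # or switch an already-running process into debug mode
$ node debug localhost:5858   # attach the CLI debugger to the listening process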
Using Node Inspector
In addition to the CLI debug tool, Node Inspector also provides a GUI inspector inside the web browser (currently only supporting Chrome and Opera).
To use the debugger, simply install it like so:
npm install -g node-inspector
Now that we have the Node inspector installed, we can debug our script.js with:
node-debug script.js
Your shell will now output the following, and will probably open your web browser to the URL if you have Chrome or Opera set as the default browser on your development OS.
Node Inspector is now available from http://127.0.0.1:8080/debug?port=5858
Debugging `script.js`
Debugger listening on port 5858
In your web browser, you will now be able to debug your application in a similar environment to the developer tools package. Setting breakpoints and viewing code is now integrated with your browser view. Enjoy!
Conclusion
Debugging doesn't need to be a nightmare, nor does it need to be stressful.
Setting breakpoints and stepping through code is very simple in Node. It's a very similar experience to Ruby, and if you are trying to understand an application you have been given, opening the app in debug mode and pausing execution is a fantastic way to learn quickly.
Monday, October 3, 2016
How to Use Python to Find the Zipf Distribution of a Text File
You might be wondering about the term Zipf distribution. To understand what we mean by this term, we need to define Zipf's law first. Don't worry, I'll keep everything simple.
Zipf's Law
Zipf's law simply states that, given some corpus (a large and structured set of texts) of natural language utterances, the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, four times as often as the fourth most frequent word, and so forth.
Let's look at an example of that. If you look into the Brown Corpus of American English, you will notice that the most frequent word is the (69,971 occurrences). If we look into the second most frequent word, that is of, we will notice that it occurs 36,411 times.
The word the accounts for around 7% of the words in the Brown Corpus (69,971 out of slightly over 1 million words). The word of, in turn, accounts for around 3.6% of the corpus (around half the frequency of the). Thus, we can see that Zipf's law applies to this situation.
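To make the arithmetic concrete, here is a quick sanity check of the law against the counts quoted above:

# Rough check of Zipf's law on the Brown Corpus counts quoted above.
the_count = 69971               # rank 1: "the"
of_count = 36411                # rank 2: "of"
predicted_of = the_count / 2.0  # Zipf predicts rank 2 occurs about half as often
print predicted_of, of_count    # ~34985.5 vs. 36411: in the right ballpark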
Thus, Zipf's law is trying to tell us that a small number of items usually account for the bulk of activities we observe. For instance, a small number of diseases (cancer, cardiovascular diseases) account for the bulk of deaths. This also applies to words that account for the bulk of all word occurrences in literature, and many other examples in our lives.
Data Preparation
Before moving forward, let me refer you to the data we will be using to experiment with in our tutorial. Our data this time will be from the National Library of Medicine. We will be downloading what's called a MeSH (Medical Subject Heading) ASCII file, from here. In particular, d2016.bin (28 MB).
I will not go into detail in describing this file since it is beyond the scope of this tutorial, and we just need it to experiment with our code.
Building the Program
After you have downloaded the data in the above section, let's start building the Python script that will find the Zipf distribution of the data in d2016.bin.
The first normal step to perform is to open the file:
open_file = open('d2016.bin', 'r')
In order to carry out the necessary operations on the bin file, we need to load its contents into a string variable. This can be achieved simply using the read() function, as follows:
file_to_string = open_file.read()
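As an aside, the with statement achieves the same thing while closing the file for us automatically; a small equivalent alternative to the explicit open() and read() used here:

with open('d2016.bin', 'r') as open_file:
    file_to_string = open_file.read()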
Since we will be looking for some pattern (i.e. words), regular expressions come into play. We will thus be making use of Python's re module.
At this point we have already read the bin file and loaded its content in a string variable. Finding the Zipf's distribution means finding the frequency of occurrence of words in the bin file. The regular expression will thus be used to locate the words in the file.
The method we will be using to make such a match is the findall() method. As mentioned in the re module documentation about findall(), the method will:
Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found. If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group. Empty matches are included in the result unless they touch the beginning of another match.
What we want to do is write a regular expression that will locate all the individual words in the text string variable. The regular expression that can perform this task is:
\b[A-Za-z][a-z]{2,9}\b
where \b is an anchor for word boundaries. In Python, this can be represented as follows:
words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', file_to_string)
This regular expression basically tells us to find all the words that start with a letter (upper-case or lower-case) followed by a sequence of 2 to 9 lower-case letters. In other words, the words included in the output will range from 3 to 10 characters long.
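To get a feel for what this pattern matches, here is a quick check against a made-up sample sentence (note how the too-short word a and the all-caps or digit-containing tokens DNA and v1 are excluded):

import re
print re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', 'The quick brown fox, a v1 DNA test')
# ['The', 'quick', 'brown', 'fox', 'test']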
We can now create an empty dictionary and run a loop that calculates the frequency of occurrence of each word:

frequency = {}
for word in words:
    count = frequency.get(word, 0)
    frequency[word] = count + 1
Here, if the word has not been seen yet, get() returns the default value 0 instead of raising a KeyError; otherwise it returns the stored count. We then add 1, so frequency[word] always holds the number of times the word has occurred in the list so far.
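As an aside, Python's standard library can do this counting for us: collections.Counter builds the same mapping in a single call (shown here only as an alternative sketch, not part of the tutorial's program):

from collections import Counter

frequency = Counter(words)
print frequency.most_common(10)  # the ten most frequent (word, count) pairs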
Finally, we will print the key-value pairs of the dictionary, showing each word (key) and the number of times it appeared in the list (value):
for key, value in reversed(sorted(frequency.items(), key=itemgetter(1))):
    print key, value
The part sorted(frequency.items(), key=itemgetter(1)) sorts the output by value in ascending order, that is, from the least frequent word to the most frequent. To list the most frequent words first, we pass the result to the reversed() function.
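Equivalently, sorted() can perform the reversal itself via its reverse flag, saving the extra reversed() call; a minor stylistic alternative:

for key, value in sorted(frequency.items(), key=itemgetter(1), reverse=True):
    print key, value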
Putting It All Together
After going through the different building blocks of the program, let's see how it all looks together:
import re
from operator import itemgetter

frequency = {}
open_file = open('d2016.bin', 'r')
file_to_string = open_file.read()
open_file.close()

# Find all words of 3 to 10 characters
words = re.findall(r'(\b[A-Za-z][a-z]{2,9}\b)', file_to_string)

# Count the occurrences of each word
for word in words:
    count = frequency.get(word, 0)
    frequency[word] = count + 1

# Print the words from most to least frequent
for key, value in reversed(sorted(frequency.items(), key=itemgetter(1))):
    print key, value
Here are the first ten words and their frequencies as returned by the program:
the 42602
abcdef 31913
and 30699
abbcdef 27016
was 17430
see 16189
with 14380
under 13127
for 9767
abcdefv 8694
From this Zipf distribution we can validate Zipf's law: a small number of high-frequency words, such as the, and, was, and for above, represent the bulk of word occurrences. The same goes for the sequences abcdef, abbcdef, and abcdefv, which are highly frequent letter sequences with a meaning particular to this file.
Conclusion
In this tutorial, we have seen how Python makes it easy to work with statistical concepts such as Zipf's law. Python comes in very handy when working with large text files, which would take a great deal of time and effort if we were to find the Zipf distribution manually; as we saw, we were able to quickly load, parse, and find the Zipf distribution of a 28 MB file. Sorting the output was also straightforward thanks to Python's dictionaries.
CSS Grid Layout: Going Responsive
Throughout this series we’ve become familiar with Grid syntax, learned about some efficient ways of laying out elements on a page, and said goodbye to some old habits. In this tutorial we’re going to apply all of that to some practical responsive web design.
Media Queries
Let’s use the demo from where we left off last time.
It comprises two declared grids; our main grid and the nested grid within one of our items. We can control when these grids come into effect using media queries, meaning we can completely redefine our layout at different viewport widths.
Begin by duplicating the first grid declaration, and wrapping the duplicate in a mobile-first media query (I’m using 500px as the breakpoint, but that’s completely arbitrary):
.grid-1 {
/* grid declaration styles */
}
@media only screen and (min-width: 500px) {
.grid-1 {
/* grid declaration styles */
}
}
Now, within the first declaration we’ll change how our grid is defined, placing the whole thing in a single column. We’ll list just one column in our grid-template-columns rule, make sure the four rows we now have are defined with grid-template-rows, and arrange the layout with grid-template-areas:
.grid-1 {
display: grid;
width: 100%;
margin: 0 auto;
grid-template-columns: 1fr;
grid-template-rows: 80px auto auto 80px;
grid-gap: 10px;
grid-template-areas: "header"
"main"
"sidebar"
"footer";
}
We’ve also made our grid-gap just 10px by default, to account for smaller screens.
Here’s what that gives us. You’ll notice that we’re also using our media query to change the padding and font-size on our .grid-1 div items.
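For reference, the duplicated declaration inside the media query might look something like the following; the demo’s exact track sizes aren’t reproduced above, so treat the specific values here as assumptions:

@media only screen and (min-width: 500px) {
    .grid-1 {
        display: grid;
        grid-template-columns: 1fr 1fr 1fr;
        grid-template-rows: 80px auto 80px;
        grid-gap: 20px;
        grid-template-areas: "header header header"
                             "main   main   sidebar"
                             "footer footer footer";
    }
}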
Our Nested Grid
That takes care of the main layout, but we still have the nested grid, which remains stubbornly in its two-column layout under all circumstances. To fix that we’ll do exactly the same as before, but use a different breakpoint to suggest a content-first approach:
.item-2 {
/* grid declaration styles */
}
@media only screen and (min-width: 600px) {
.item-2 {
/* grid declaration styles */
}
}
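Filling in those placeholders, the nested grid might collapse to a single column by default and restore its two columns at the breakpoint; again, the exact values here are assumptions rather than the demo’s own:

.item-2 {
    display: grid;
    grid-template-columns: 1fr;
    grid-gap: 10px;
}

@media only screen and (min-width: 600px) {
    .item-2 {
        grid-template-columns: 1fr 1fr;
    }
}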
Check out the end result on CodePen.
A couple of things to note here:
- As we’ve said before, you can visually arrange grid items irrespective of the source order, and media queries mean we can have different visual orders for different screen widths. However, nesting has to remain true to the source; our nested grid items must always be descendants of their parent, both visually and in the markup.
- CSS transitions don’t have any influence over Grid layout. When our media queries kick in, and our grid areas move to their new positions, you won’t be able to ease them into place.
auto-fill and minmax()
Another (sort of) responsive approach to Grid is well suited to masonry-type layouts: blocks which size themselves and flow automatically, depending on the viewport.
auto-fill
Our approach up until now has been to dictate how many tracks there are and watch the items fit accordingly. That’s what is happening in this demo; we have grid-template-columns: repeat(4, 1fr); which says “create four columns, and make each one a single fraction unit wide”.
With the auto-fill keyword we can dictate how wide our tracks are and let Grid figure out how many will fit in the available space. In this demo we’ve used grid-template-columns: repeat(auto-fill, 9em); which says “make the columns 9em wide each, and fit as many as you can into the grid container”.
Note: this also takes our gutters, the grid-gap, into account.
The container in these demos has a dark background to show clearly how much space is available, and you’ll see that it hasn’t been completely filled in the last example. So how do we do that?
minmax()
The minmax() function allows us to set a minimum and a maximum size for a track, letting Grid work within them. For example, we could set up three columns, the first two being 1fr wide, and the last being a maximum of 1fr but shrinking no smaller than 160px:
grid-template-columns: 1fr 1fr minmax(160px, 1fr);
All the columns will shrink as you squish the window, but the last column will only be pushed so far. Take a look.
Back to our auto-fill demo: if we change our column width to minmax(9em, 1fr), Grid will place as many 9em tracks as possible, then expand them to a maximum of 1fr until the container is filled:
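Substituted into the repeat() rule from before, that looks like this:

grid-template-columns: repeat(auto-fill, minmax(9em, 1fr));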
Caveat: Grid will recalculate the tracks upon page reload (try squishing the browser window and hitting refresh) but it won’t do so on window resize. Media queries can be used to alter the values, but they still won’t play nice with window resize.
Conclusion
Let’s wrap up with some bullets:
- Media queries can help us completely rearrange Grid layouts by redefining grid-template-areas (and other things) for different scenarios.
- CSS transitions don’t have any effect on changes made to the grid layout.
- The auto-fill keyword is useful for filling up grid containers.
- The minmax() function complements auto-fill nicely, making sure containers are properly filled, but doesn’t give us “responsiveness” in the true sense of the word.
With the lessons learned in this series, you’re armed to go out and start playing with Grid! Stay tuned for more Grid tutorials, practical exercises, solutions to common layout problems, and updates.
Useful Resources
- Rachel Andrew’s Grid by Example 29: minmax() and spanning columns and rows
- Video: Rachel Andrew (obviously) demonstrating minmax() on the Tuts+ homepage layout
- W3C Editor’s Draft: auto-fill
- W3C Editor’s Draft: minmax()