Feed aggregator

EFF, CloudFlare Ask Federal Court Not To Force Internet Companies To Enforce Music Labels’ Trademarks

Cloudflare Blog -

This blog was originally posted by the Electronic Frontier Foundation, which represents CloudFlare in this case.

JUNE 18, 2015 | BY MITCH STOLTZ

This month, CloudFlare and EFF pushed back against major music labels’ latest strategy to force Internet infrastructure companies like CloudFlare to become trademark and copyright enforcers, by challenging a broad court order that the labels obtained in secret. Unfortunately, the court denied CloudFlare’s challenge and ruled that the secretly-obtained order applied to CloudFlare. This decision, and the strategy that led to it, present a serious problem for Internet infrastructure companies of all sorts, and for Internet users, because they lay out a blueprint for quick, easy, potentially long-lasting censorship of expressive websites with little or no court review. The fight’s not over for CloudFlare, though. Yesterday, CloudFlare filed a motion with the federal court in Manhattan, asking Judge Alison J. Nathan to modify the order and put the responsibility of identifying infringing domain names back on the music labels.

We’ve reported recently about major entertainment companies’ quest to make websites disappear from the Internet at their say-so. The Internet blacklist bills SOPA and PIPA were part of that strategy, along with the Department of Homeland Security’s project of seizing websites based on unverified accusations of copyright infringement by entertainment companies. Entertainment distributors are also lobbying ICANN, the nonprofit organization that oversees the Internet’s domain name system, to gain the power to censor and de-anonymize Internet users without court review. (Fortunately, it looks like ICANN is pushing back.)

The order that CloudFlare received in May is an example of another facet of the site-blocking strategy. It works like this: entertainment companies file a lawsuit in federal court against the (possibly anonymous) owners of a website they want to make disappear. The website owners typically don’t show up in court to defend themselves, so the court issues an order to the website to stop infringing some copyright or trademark. (Often this order is initially drafted by the entertainment companies.) The entertainment companies then send a copy of the order to service providers like domain name registrars, Web hosting providers, ISPs, and content delivery networks like CloudFlare, and demand that the service providers block the targeted website or domain name, as well as other websites and domains that the entertainment companies want gone.

This month’s case involved a website that called itself Grooveshark, and appeared to be a clone of the site by that name that shut down in April after settling a copyright lawsuit with the record labels. That settlement left the labels in control of Grooveshark’s trademarks, which they proceeded to use as a weapon against the copycat site. The labels applied to the U.S. District Court for the Southern District of New York for a secret order to shut down the site, which was then located at grooveshark.io. Judge Deborah A. Batts granted the order in secret. The order covered the site’s anonymous owners, and anyone “in active concert or participation” with them. The order also listed “domain name registrars . . . and Internet service providers” among those who have to comply with it. The labels sent a copy of the order to several companies, including CloudFlare.

When a federal court issues an order (called an injunction), court rules say it can apply to a party in the case or to anyone in “active concert or participation” with them. But the courts haven’t clarified what “active concert or participation” means in the Internet context. Communication over the Internet can involve dozens of service and infrastructure providers, from hosts to domain name registrars to ISPs, backbone providers, network exchanges, and CDN services. Under a broad reading, even an electric utility or a landlord that leases space for equipment could conceivably be in “active concert or participation” with a website.

CloudFlare decided to take a stand against the overbroad order, asking the court to clarify that the order did not apply to CloudFlare. As a CDN and reverse proxy service, CloudFlare makes websites faster and more secure, but can’t suspend a site’s domain name or render it unreachable. So even if making Internet intermediaries responsible for enforcing copyright and trademark laws was a good idea (it’s not), CloudFlare is not the right one to do it. And even if the mysterious owners of the “new Grooveshark” site are bad actors, CloudFlare wanted to protect its law-abiding customers by insisting on a correct and thorough court process before cutting off any customer.

Unfortunately, the court concluded that the initial order applied to CloudFlare. And even worse, the court said that CloudFlare has to block every user with a domain name that contains “grooveshark,” no matter who owns the site. That means that CloudFlare, or any Internet infrastructure company that gets served with a copy of the court order, would have to filter and ban sites called “groovesharknews,” “grooveshark-commentary,” or “grooveshark-sucks,” no matter who runs them or what they contain.

That’s a big deal. Laws like Section 512 of the Digital Millennium Copyright Act, Section 230 of the Communications Decency Act, and court decisions on trademark law protect Internet intermediaries from legal responsibility for the actions of their users, including the responsibility to proactively block or filter users. That protection has been vital to the growth of the Internet as a medium for communication, innovation, and learning. Those laws help keep Internet companies and entertainment conglomerates from becoming the gatekeepers of speech with the power to decide what we can and can’t communicate. The record labels didn’t accuse CloudFlare of any copyright or trademark violation—nor could they, in part because of laws like the DMCA. Yet the order against CloudFlare might force CloudFlare, and other service providers, to filter their services for terms like “grooveshark”—and other words that might appear in future court orders. Service providers like CloudFlare could find themselves in the uncomfortable position of having to figure out who’s allowed to use “grooveshark” and who isn’t—or of having to block them all. Turning Internet companies into enforcers of who can say what on the Internet is exactly what laws like the DMCA were meant to avoid.

And CloudFlare is far from the only Internet company to be hit with an order like this. Many, including some domain name registrars, simply comply with overbroad court orders, asking no questions, instead of sticking up for their users.

Yesterday, CloudFlare took a new step. Represented by EFF and Goodwin Procter, CloudFlare asked the court to change the order so that in the future, CloudFlare will only be responsible for taking down user accounts that use variations on “grooveshark” if the music labels notify CloudFlare that a site is infringing. That change will put the job of enforcing trademarks back on the trademark holders, and preserve the balance created by laws like the DMCA. CloudFlare supports smart, effective, and careful trademark enforcement and wants to see it done right – not through broad orders that can impact free speech.

Other Internet companies that care about their users, and would rather not become unwilling trademark and copyright enforcers and arbiters of speech, should follow CloudFlare’s lead and push back against orders like this one.

Project Management

Lullabot -

In this week's Drupalize.Me podcast, hostess Amber Matz chats about all things Project Management with Seth Brown (COO at Lullabot) and Lullabot Technical Project Managers Jessica Mokrzecki and Jerad Bitner. To continue the conversation, check out Drupalize.Me's series on Project Management featuring interviews and insights from these fine folks and others at Lullabot.

Drupal Association News: My Week at DrupalCon, part 2

Planet Drupal -

Part 1 of My Week at DrupalCon

Part 2:

As our community grows, so do our programs. This year, in addition to hosting trainings and both the Community Summit and Business Summit, we offered a Higher-Ed Summit at DrupalCon. As soon as it was announced, folks clamored to sign up, and the tickets sold out at a rapid pace. We at the Drupal Association see this as a great example of how the growing variety of offerings at DrupalCon reflects the increasing diversity of our community’s interests and skill sets.

The Higher-Ed Summit was a huge hit, due in large part to the efforts of the Summit leads, Christina and Shawn. They worked hard to understand what the higher-ed community wanted and needed from the Summit and strategized to provide it, down to the last detail. Their planning and experience were integral to the popularity of the event, and we look forward to working with these awesome volunteers again in the future.

Maybe I’m naive or a wide-eyed optimist, but meeting and speaking to people from all over the world is invigorating and exciting to me. Throughout the course of DrupalCon I had the opportunity to meet with community organizers from near and far. While it’s true that many attendees came from the United States and Canada, there were also organizers who came from as far away as Latin America, Europe, India, and Japan, and talked about how Drupal has affected their communities and their livelihoods.  It is always such a pleasure to see Drupal changing lives and bringing opportunities for personal growth and business everywhere.  

After an exhausting week of keynotes, and BOFs, and meetings, and dinners, I launched into the sprints on Friday with the purpose of understanding Drupal more. I always enjoy discussing Drupal’s unique qualities with developers, site-builders, and themers, but this DrupalCon I really wanted to engage in more than just conversations. I wanted to experience what it is like to directly develop and work with Drupal. At the Friday sprints, my friend and new mentor Amy agreed to sit down with me and help me put together my own blog, running on a Drupal website. During the process, I realized that there is no better way to start to understand the complexity of Drupal than to use the product myself.

When learning to use Drupal in the sprint, I realized that we really are about fostering a friendly, inclusive, and diverse community. We talk the talk and we walk the walk. Amy sat down with me and patiently showed me, step by step, how to start my site. We picked a hosting provider and a domain name, downloaded Drupal, and began organizing our modules and features. Finally, I started to really get it, which was incredibly exciting. Both personally and professionally, it meant a lot to me that someone would take the time to help me on my journey. It really brought home the fact that Drupalers genuinely care, are excited and willing to share knowledge, and have fun while doing it.

DrupalCon Los Angeles was a spectacular event. I feel like this blog wouldn’t be a proper message from LShey without some shout-outs and kudos, so please join me in celebrating others. I’d like to send out a big thank you to our talented events team at the Drupal Association for organizing a seamless and beautiful event. Thank you to our sponsors, whose support helps us put on this event. Thank you to our dedicated volunteers: whether you were a sprint mentor, room monitor, or speaker, your time and expertise are appreciated and valued. Our volunteers truly make DrupalCon a wonderful event. A special shout-out to the team that keeps us all informed: thank you to Alex and Paul for running the @drupalconna Twitter handle. Thank you to Emma Jane, who was our MC this DrupalCon and engaged our keynote speakers with witty and thoughtful interviews. Lastly, thank you to all of you, our community. DrupalCon would not be the same without you. I’m looking forward to seeing you all at the next one!

Drupal on, 

Lauren Shey
Community Outreach Coordinator
Drupal Association
@lsheydrupal

Acquia: How Weather.com Improved Their Page Load Times

Planet Drupal -

In November 2014, Weather.com launched on Drupal and became one of the highest-trafficked websites in the world to run on an open-source content management system (CMS). Mediacurrent and Acquia are excited to announce a new three-part blog post series that will share insight into how Weather.com was migrated to Drupal. Our team of experts will share best practices and the lessons we learned during the project.

There's an old saying, “Everyone talks about the weather, but nobody does anything about it.” While we are a long way from controlling the weather, Weather.com has done a spectacular job of delivering accurate weather news, as rapidly as possible, to all kinds of devices.

This is a small miracle, especially when you consider Weather.com served up a billion requests during its busiest week. Even slow weeks require delivering hundreds of dynamic maps and streaming video to at least 30 million unique users in over three million forecast locations. The site has to remain stable with instantaneous page loads and 100 percent uptime, despite traffic bumps of up to 300 percent during bad weather.

Page load times are the key to their business and their growth. When The Weather Channel's legacy CMS showed signs of strain, they came to Drupal.

On their legacy platform, Weather.com was stuck at 50 percent cache efficiency, and their app servers were taking on far too much of the work. The legacy platform ran on 144 origin servers across three data centers; it took all that muscle to keep up with the number of changes constantly happening across the site.

Traditionally, when you have a highly trafficked site, you put a content delivery network (CDN) in front of it and call it a day. The very first time a page is requested, the CDN fetches it from the origin server and then caches it to serve to all future requestors.

Unfortunately, it doesn't work that way for a site like Weather.com.

Consider this: If a user in Austin visits a forecast page, they see a certain version of that page. A visitor from Houston sees a slightly different version of that page. Not only are there two different versions of the page, one for each location, but much of the information on the page is only valid for about five minutes.

At the scale of three million locations, that’s a lot of pages that have to be rebuilt on an ongoing basis, only to be cached for five minutes each. Couple this with the fact that the number of served locations kept increasing as developers worked on the site, and you can see how things were rapidly getting out of control.

The first thing we did was break up the page into pieces with longer or shorter lifespans, based on the time-sensitivity of the content. That allowed us to identify the parts of the page that could live longest and be served to the majority of users. The parts that vary are no longer rendered on the origin servers; instead, they are delegated to systems closer to the user, where the variation actually happens.

To accomplish that trick, we switched to a service-oriented architecture and client-side rendering, using Angular.js, ESI (Edge Side Includes), and some Drupal magic. The combination of these three components boosted cache efficiency and page performance, and reduced the number of servers required to deliver the site.
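
To make the fragment idea concrete, here is a minimal Go sketch of the pattern: two fragments of the same page advertising different cache lifetimes to the edge. This is an illustration only, not Weather.com's actual stack (which used Angular.js, ESI, and Drupal as described above), and the /shell and /fragment/forecast paths are hypothetical.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // The static page shell rarely changes: safe to cache at the edge for a day.
        http.HandleFunc("/shell", func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Cache-Control", "public, max-age=86400")
            fmt.Fprint(w, "<html>...page chrome, nav, footer...</html>")
        })

        // The forecast fragment varies per location and is only valid for
        // about five minutes, so it is cached separately and briefly.
        http.HandleFunc("/fragment/forecast", func(w http.ResponseWriter, r *http.Request) {
            loc := r.URL.Query().Get("loc") // e.g. "austin" or "houston"
            w.Header().Set("Cache-Control", "public, max-age=300")
            fmt.Fprintf(w, "<div>forecast for %s</div>", loc)
        })

        http.ListenAndServe(":8080", nil)
    }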

The result? After launch, we showed Weather.com a 90 percent cache efficiency. Going from 50 to 90 percent cache efficiency means the origin servers see one request in ten instead of one in two (a fivefold reduction in origin traffic), which means you need fewer of them. Post launch, we were able to increase cache efficiency even further.

This cache efficiency was also measured only at the edge. Varnish (a caching proxy) further reduced the amount of traffic, meaning that Drupal itself, behind the Varnish stack, was serving less than 4 percent of the requested traffic. The service-oriented architecture also means that scaling is simpler, architectural changes are less painful, and the end user gets a richer experience.

Doing something about the weather is still way out on the horizon, but Weather.com can certainly claim that it has improved the delivery of weather news.

Tags:  acquia drupal planet

Lullabot: Drupal 8 Theming Fundamentals, Part 2

Planet Drupal -

In our last post on Drupal 8 theming fundamentals, we learned to set up a theme and add our CSS and JavaScript. This time around we’re talking about the Twig templating engine, how to add regions to our theme, and then finish with a look at the wonderful debugging available in Drupal 8.

qed42.com: Upcasting menu parameters in Drupal 8

Planet Drupal -

Menu upcasting means converting a raw menu argument into something richer, such as an object or an array. In this article, we will look at how it used to be done in the Drupal 7 codebase and how to port it to Drupal 8.
Let’s take the following Drupal 7 example:

    function my_module_menu() {
      $items['node/%my_menu/mytab'] = array(
        // ...
      );
    }
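
As a rough, language-agnostic illustration of the concept (Drupal specifics aside), here is a hypothetical Go sketch in which the raw id segment of a node/42/mytab path is converted into a loaded object before the page logic runs; loadNode plays the role Drupal gives the %my_menu wildcard loader.

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "strings"
    )

    // Node stands in for the object a raw URL argument gets upcast to.
    type Node struct {
        ID    int
        Title string
    }

    // loadNode is a hypothetical loader: it turns the raw id into an object,
    // the way a Drupal wildcard loader would for %my_menu.
    func loadNode(id int) (*Node, error) {
        return &Node{ID: id, Title: fmt.Sprintf("node %d", id)}, nil
    }

    func main() {
        // Route shaped like node/%my_menu/mytab, e.g. /node/42/mytab.
        http.HandleFunc("/node/", func(w http.ResponseWriter, r *http.Request) {
            parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
            if len(parts) != 3 || parts[2] != "mytab" {
                http.NotFound(w, r)
                return
            }
            id, err := strconv.Atoi(parts[1])
            if err != nil {
                http.NotFound(w, r)
                return
            }
            node, err := loadNode(id) // the upcast: string -> object
            if err != nil {
                http.NotFound(w, r)
                return
            }
            fmt.Fprintf(w, "mytab for %q", node.Title)
        })
        http.ListenAndServe(":8080", nil)
    }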

Drupal Watchdog: Small Sites, Big Drupal

Planet Drupal -

In a much-analyzed 2013 interview with Computerworld, Drupal founder and “benevolent dictator” Dries Buytaert laid out a future path for the software focused squarely on enterprise clients (see also “Will the Revolution be Drupalized?”). While small sites had their place, Buytaert asserted, “I think we just need to say we’re more about big sites.” With Drupal 8, he concluded, “I really think we can say we’ve built the best CMS for enterprise systems.”[1]

Where does this bright future leave the smaller sites that up till now have formed the mainstay of Drupal adopters?

What’s in the Pipe

Drupal 8 is not all bad news for smaller sites; there are many new features and enhancements that should lower or eliminate some previous barriers.

  • More in core: Many areas of key functionality that previously required downloading, installing, and configuring modules and other dependencies now will work out of the box. Case in point: WYSIWYG editing.
  • UI improvements: A lot of customization that previously required specialized modules or custom code is now exposed via the core admin interface.

That said, there are signs of trouble ahead:

Hosting Barriers

Drupal 7 performance already pushed the limits of the typical, inexpensive, shared hosting that most small sites rely on. And Drupal 8? Watch out. It has what Drupal 8 maintainer Nathaniel Catchpole frankly called “an embarrassingly high memory requirement.”[2] Yes, memory issues can be addressed through solutions like reverse proxy caching or pushing search indexing to Solr. But those options are precisely the ones that are missing from the vast majority of shared hosts.

DIYers Beware

Small Drupal sites have benefited from the ease of dabbling in Drupal development. Drupal 8, in contrast, has been rewritten from the ground up with professional programmers in mind. Dependency injection, anyone?

Phase2: Developer Soft Skills Part 1: Online Research

Planet Drupal -

Developer Soft Skills

One of my earliest jobs was customer service for a call center. I worked for many clients, each with training specific to their service. No matter the type of training, whether technical or customer-oriented, soft skills were always included. As Margaret Rouse put it, “Soft skills are personal attributes that enhance an individual’s interactions, career prospects and job performance. Unlike hard skills, which tend to be specific to a certain type of task or activity, soft skills are broadly applicable.”

In this blog series I will be discussing what I call “developer soft skills.” The hard skills in development are (among others) logic, languages, and structure. Developer soft skills are those that help a developer accomplish their tasks outside of that knowledge. I will be covering the following topics:

  • Online research
  • Troubleshooting
  • Enhancing/Customizing
  • Integrating
  • Architecting
Part 1: Online Research

One of the first skills a developer should master is online research. The area is not without controversy (more on that later), but it is a necessary skill for learning about new technologies, expanding your knowledge, and solving problems.

One of the best reasons for research is continuous education. For many professions (such as the military, education and medical fields) continuing education is required to keep up on updated information, concepts, and procedures. As a developer, continuing to grow our skill set helps us develop better projects by using better code, better tools, and better methods.

Search engine queries

Researching a topic on the internet usually involves a search engine, so it helps to understand how search engines work and how to get the best results from them. There are two parts to how a search engine works: part one is data collection and indexing; part two is searching, or querying, that index. I will focus on how to write the best possible query; to learn more about how search engines collect and index data, see this link. To write good queries, we should understand how search engines respond to what we type into the search box. Early search results were rendered based on simple (by today’s standards) comparisons of search terms to indexed page word usage, plus boolean logic. Since then, search engines have started to use natural language queries.

We can use this to our advantage. Say I wanted to research how to make a calendar with the Java programming language. Instead of searching for keywords and distinct ideas by themselves (“java -script calendar”), use natural language to include phraseology and context in the query: “how can I make a calendar with java”. The first result from the keyword search returns a reference to the Java Calendar class. The first result from the second query returns example code for writing a calendar in Java. The better the query, the better the results.

Search result inspection

Once we have the right query, we can turn our attention to the results. One of the first things I do is limit the results to a date range. This prevents results from the previous decade (or earlier) from being displayed alongside more recent and applicable ones. Another way to focus the search is to limit the site it takes place on. If we know we want a jQuery function, search jquery.com.

Once we have filtered our results, it’s time for further inspection. When viewing a results page, the first thing I look for is the context of the article or post. Does the author and/or site have a lot of ads? That can mean the site is more about making money than providing good answers. Does the page have links or other references to related topics or ideas? That can show whether the author is knowledgeable in the subject matter.

The controversy

Earlier I mentioned online researching can be a controversial topic. One of the points of controversy is discussed in Scott Hanselman’s blog post, Am I really a developer or just a good googler? While I agree with his major point, that researching bad code can be dangerous, I contend that using a search engine can produce good results and learning opportunities.

Almost any time you search for a programming topic, one site or group of sites is predominant in the results: Stack Overflow, or the Stack Exchange group of sites. Several articles have been written about reasons not to use it, the consequences of using it, and why some developers no longer use Stack Overflow. Using Stack Overflow will not solve all your problems or make you a better developer.

Again, these arguments make some good points. But I think that using Stack Overflow correctly, just like good use of search engines, can produce good results. Using a Stack Exchange site comes with the benefit of community. These sites have leveraged the Stack Exchange Q&A methodology for their specific topic or technology and can be a great resource on how to solve a problem within the bounds of that community. One of my development mentors told me that there are thousands of ways to solve a programming problem, and usually several wrong ones. The key is to avoid the wrong ones and find one of the best ones. Searching within a Stack Exchange site can highlight the wrong answers, but it also surfaces the ones that work best in that system.

Here is an example of a Stack Overflow Drupal community response that came up when I searched for: “drupal create term programmatically.”

This response is correct, but if you look at the link provided, you will see it is for Drupal 6. If you were looking for how to do this in Drupal 7, for instance, the answer provided would not be correct. We could have improved our results by adding “Drupal 7” to our query. Most important is to keep in mind that sites like Stack Overflow, and other community sites such as php.net, include a mix of user-generated responses, meaning anyone can respond without being vetted.

Keep going

The best piece of advice I can offer in response to the arguments against using online search results and Stack Overflow is: “This is not the end.” Keep going past the result and research the answer. Don’t just copy and paste the code. Don’t just believe the top-rated answer or blog post. Click the references cited, search the functions or API calls that appear in the answer, and make the research part of your knowledge. Then give back by writing your own articles or posting your own answers. Answering questions can sometimes be just as powerful a learning tool as searching for them.

In the end, anything you find through search, blog, and code sites should be considered a suggestion as one way of solving a problem – not necessarily the solution to your concern.

In the next post I will discuss a good use case for Stack Exchange sites, Developer Soft Skills Part 2: Troubleshooting.

Subscribe to our newsletter to keep up with new projects and blogs from the Phase2 team!

Palantir: D8FTW: The Drupal 8 Tour

Planet Drupal -

Drupal 8 is expected out this fall sometime (good lord willin' and the crick don't rise, as my mother used to say). It's a big change, but a long-needed one. It's also one that the whole PHP community is looking forward to, if what I've seen at conferences over the last few years is anything to go by.

One of my foci this year has been to help the Drupal and PHP communities get ready for Drupal 8. That's why I've been submitting Drupal 8-centric sessions to conferences across the country and around the world, and why conferences keep asking for them! (Tip for people who want to submit sessions to PHP conferences...)

Without intending to, I have basically kicked off my own Drupal 8 World Tour!

Where has the tour been so far in 2015?

(I was also at Midcamp here in Chicago, but of all places they didn't want a Drupal 8 talk from me!)

If you haven't caught the tour yet, it may be coming to a town near you soon. My travel schedule for the rest of the year is pretty booked as well. Join me at any of the following events (which should be great events in their own right) to get a Crash Course in Drupal 8 or more!

And stay tuned for a few other possibilities in October...

Let's get together and learn about Drupal 8!

Blink Reaction: Building Native Apps - Part 3

Planet Drupal -

Building native mobile apps with Ionic Framework and Drupal back-end: configure Drupal REST server

Today, we continue to build our mobile app with Ionic Framework. Acquia has a service called Acquia Cloud Free, which provides a development and staging environment for free. With it, you can create test servers with Drupal and install Drush with a few mouse clicks. You also get a Git repository from which your site is built, along with a lot more features. I have created a simple blog website with some dummy content generated by the Devel module; you can check it out here.

Required modules

To create a REST server for this blog, we will use two contrib modules: Services and Views Datasource. Let’s install them and enable the Services, REST Server, and Views JSON modules. Next, go to /admin/structure/services and click the Add link at the top of the page. You can also download and enable the CORS module to be able to test the mobile app in your browser before compiling it; configure it to allow requests to all /api URLs and to the user session token at /services/session/token (which we will use for user authorization in our app). For security reasons, you can remove this module and its settings after testing.

REST server setup

Next, you must configure your new REST server and its output. Set the machine name of the server, select the REST type, and set the base path for all resources to /api. You should also enable session authentication, which we will use later in the app.

We select the “json” response formatter, and the “application/json” and “application/x-www-form-urlencoded” request parsers, so the app will send and receive all data in JSON format.

We won’t enable any resources for now. Instead, we will create views to expose the data we will use in our app. Doing it this way lets us receive only the fields we need and gives us flexible control over each field’s output.

Adding views resources

Next, we’ll add a new view with a page display, the JSON data document format, and a path of api/articles. We should also set the output limit to 10 articles and check “Use a pager.” This gives us an articles list with pagination, showing 10 articles per page. In our application, the articles controller will load more articles by passing a page parameter in the request.

The JSON output settings shape the output data. We removed the root object name and top-level child object to get an unnamed JSON array of objects. We can configure each field’s output by giving it a label; for example, we changed image_field to image.

We added title, nid and image fields to this view. This data will be used on the articles tab in our application.

Now if we visit the /api/articles page, we should get JSON data for the first 10 articles. We can also pass a page GET parameter.
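
Independent of the Ionic app itself, it can be handy to sanity-check the endpoint’s shape from any HTTP client. Here is a minimal Go sketch that fetches one page of articles and decodes the fields configured above (nid, title, image); the example.com host is a placeholder, and treating nid as a string is an assumption about the Views JSON output.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Article mirrors the fields exposed by the api/articles view.
    type Article struct {
        Nid   string `json:"nid"`
        Title string `json:"title"`
        Image string `json:"image"`
    }

    func main() {
        // Page 0 is the first page; the view serves 10 articles per page.
        resp, err := http.Get("http://example.com/api/articles?page=0")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // The view was configured without a root object name, so the
        // response body is a bare JSON array of article objects.
        var articles []Article
        if err := json.NewDecoder(resp.Body).Decode(&articles); err != nil {
            panic(err)
        }
        for _, a := range articles {
            fmt.Println(a.Nid, a.Title)
        }
    }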

In the same way, we create a view page with path /api/article/% and a contextual filter to get a single article by nid. Here we add nid, title, image, body and comment_count fields. This will be used on an article details page.

We also need a view that returns the list of non-empty categories (/api/categories). We create a relationship (Taxonomy term: Content with term) to count how many articles exist in each category, and a filter to show only categories that contain one or more articles. We will use this data for the categories tab in the hybrid app.

A single category is fetched by tid (/api/category/%), along with a paginated list of the articles in that category via the same Taxonomy term: Content with term relationship. We also add the category tid and name, and the article nid, title, and image (the article fields and format are the same as in the articles view), so the mobile application can show all articles related to the current category.

Now we have a REST server from which we can get a blog’s articles and categories as JSON data that we can request and use from our app. In the next part, we will configure our application’s services. This tutorial continues tomorrow, so be sure to check back in.

Post tags: Drupal, Best Practices, Drupal Planet, Drupal Training, Learning Series, Apps, Ionic

Go has a debugger—and it's awesome!

Cloudflare Blog -

Something that often, uh... bugs[1] Go developers is the lack of a proper debugger. Sure, builds are ridiculously fast and easy, and println(hex.Dump(b)) is your friend, but sometimes it would be nice to just set a breakpoint and step through that endless if chain or print a bunch of values without recompiling ten times.

CC BY 2.0 image by Carl Milner

You could try to use some dirty gdb hacks that will work if you built your binary with a certain linker and ran it on some architectures when the moon was in a waxing crescent phase, but let's be honest, it isn't an enjoyable experience.

Well, worry no more! godebug is here!

godebug is an awesome cross-platform debugger created by the Mailgun team. You can read their introduction for some under-the-hood details, but here's the cool bit: instead of wrestling with half a dozen different ptrace interfaces that would not be portable, godebug rewrites your source code and injects function calls like godebug.Line on every line, godebug.Declare at every variable declaration, and godebug.SetTrace for breakpoints (i.e. wherever you type _ = "breakpoint").
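
Setting a breakpoint, then, looks something like this tiny, hypothetical program; note that _ = "breakpoint" is ordinary, valid Go, so the same file still builds and runs with the stock toolchain.

    package main

    import "fmt"

    func main() {
        total := 0
        for i := 1; i <= 5; i++ {
            total += i
            _ = "breakpoint" // godebug pauses here when run via godebug run
        }
        fmt.Println(total)
    }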

I find this solution brilliant. What you get out of it is a (possibly cross-compiled) debug-enabled binary that you can drop on a staging server just like you would with a regular binary. When a breakpoint is reached, the program will stop inline and wait for you on stdin. It's the single-binary, zero-dependencies philosophy of Go that we love applied to debugging. Builds everywhere, runs everywhere, with no need for tools or permissions on the server. It even compiles to JavaScript with gopherjs (check out the Mailgun post above—show-offs ;) ).

You might ask, "But does it get a decent runtime speed or work with big applications?" Well, the other day I was seeing RRDNS—our in-house Go DNS server—hit a weird branch, so I placed a breakpoint a couple lines above the if in question, recompiled the whole of RRDNS with godebug instrumentation, dropped the binary on a staging server, and replayed some DNS traffic.

    filippo@staging:~$ ./rrdns -config config.json
    -> _ = "breakpoint"
    (godebug) l
        q := r.Query.Question[0]
    --> _ = "breakpoint"
        if !isQtypeSupported(q.Qtype) {
            return
    (godebug) n
    -> if !isQtypeSupported(q.Qtype) {
    (godebug) q
    dns.Question{Name:"filippo.io.", Qtype:0x1, Qclass:0x1}
    (godebug) c

Boom. The request and the debug log paused (make sure to kill any timeout you have in your tools), waiting for me to step through the code.

Sold yet? Here's how you use it: simply run godebug {build|run|test} instead of go {build|run|test}. We adapted godebug to resemble the go tool as much as possible. Remember to use -instrument if you want to be able to step into packages that are not main.

For example, here is part of the RRDNS Makefile:

    bin/rrdns:
    ifdef GODEBUG
        GOPATH="${PWD}" go install github.com/mailgun/godebug
        GOPATH="${PWD}" ./bin/godebug build -instrument "${GODEBUG}" -o bin/rrdns rrdns
    else
        GOPATH="${PWD}" go install rrdns
    endif

    test:
    ifdef GODEBUG
        GOPATH="${PWD}" go install github.com/mailgun/godebug
        GOPATH="${PWD}" ./bin/godebug test -instrument "${GODEBUG}" rrdns/...
    else
        GOPATH="${PWD}" go test rrdns/...
    endif

Debugging is just a make bin/rrdns GODEBUG=rrdns/... away.

This tool is still young, but in my experience, perfectly functional. The UX could use some love if you can spare some time (as you can see above it's pretty spartan), but it should be easy to build on what's there already.

About source rewriting

Before closing, I'd like to say a few words about the technique of source rewriting in general. It powers many different Go tools, like test coverage, fuzzing and, indeed, debugging. It's made possible primarily by Go’s blazing-fast compiles, and it enables amazing cross-platform tools to be built easily.
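
As a toy sketch of that pattern (not godebug’s actual implementation), the standard library’s go/ast and go/printer packages are enough to interleave a call before every statement in a function body; the trace.Line identifier below is made up.

    package main

    import (
        "go/ast"
        "go/parser"
        "go/printer"
        "go/token"
        "os"
    )

    const src = `package main

    func greet() {
        x := 1
        x++
        println(x)
    }
    `

    func main() {
        fset := token.NewFileSet()
        file, err := parser.ParseFile(fset, "src.go", src, 0)
        if err != nil {
            panic(err)
        }

        // Walk every function body and interleave a trace call before each
        // statement, roughly the way godebug interleaves its godebug.Line calls.
        ast.Inspect(file, func(n ast.Node) bool {
            fn, ok := n.(*ast.FuncDecl)
            if !ok || fn.Body == nil {
                return true
            }
            var out []ast.Stmt
            for _, stmt := range fn.Body.List {
                trace := &ast.ExprStmt{X: &ast.CallExpr{
                    Fun: &ast.SelectorExpr{
                        X:   ast.NewIdent("trace"),
                        Sel: ast.NewIdent("Line"),
                    },
                }}
                out = append(out, trace, stmt)
            }
            fn.Body.List = out
            return true
        })

        // Print the rewritten source to stdout.
        printer.Fprint(os.Stdout, fset, file)
    }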

However, since it's such a handy and powerful pattern, I feel like there should be a standard way to apply it in the context of the build process. After all, all the source rewriting tools need to implement a subset of the following features:

  • Wrap the main function
  • Conditionally rewrite source files
  • Keep global state

Why should every tool have to reinvent all the boilerplate to copy the source files, rewrite the source, make sure stale objects are not used, build the right packages, run the right tests, and interpret the CLI? Basically, all of godebug/cmd.go. And what about gb, for example?

I think we need a framework for Go source code rewriting tools. (Spoiler, spoiler, ...)

If you’re interested in working on Go servers at scale and developing tools to do it better, remember we’re hiring in London, San Francisco, and Singapore!

[1] I'm sorry.

Issue 194

The Weekly Drop -

Issue 194 - June 18th, 2015

Started a cross-country road trip this week, so this issue is a bit shorter than normal. Enjoy.

From Our Sponsor

Webchick Tells All: The Ultimate Guide to Drupal 8

If you haven’t grokked all that Drupal 8 has to offer yet, check out Angie “webchick” Byron’s ebook The Ultimate Guide to Drupal 8. Get inside Views on Core, or dig in and learn how Twig will make theming a breeze. Webchick has you covered with all the info you’ll need to be successful with D8.

Articles

What's your opinion on "premium Drupal modules"?

As someone who isn't plugged into the WordPress world, I found this interesting. You might find this post, and the comments from Ryan Szrama, interesting too.

Drupal security: issues, modules, updates, checklist

A nice list of security advice and resources.

Migrating Weather.com To Drupal: Increased Content Portability

Contrib Committee Status Review for May, 2015

Mediacurrent's Damien McKenna summarizes the contrib work done by his colleagues during the month of May.

Drupal 8

Write a Migrate Process Plugin, Learn Drupal 8

Tutorials

Add unit testing to legacy code

Multiple Editors per Node in Drupal 7

Daniel Sipos shows us how to add functionality to your stock Drupal nodes on the Sitepoint.com blog.

Projects

15 minutes to your first Drupal integration test

In this post on the Red Crackle blog, they share how to use Red Test, their Drupal integration testing framework.

Easily Apply Drupal Patches with Patch Manager

OSTraining's Steve Burge introduces non-coders to the Patch manager module.

Releases

  • advagg 7.x-2.11
  • cdn 7.x-2.7-beta1
  • honeypot 7.x-1.18
  • honeypot 8.x-1.18-beta6
  • mollom 7.x-2.14
  • openlucius 7.x-1.0-rc1
  • panels 8.x-3.0-alpha11
  • personalize 7.x-1.0-rc13

Podcasts

  • Hacking Culture 7: Holly Ross on the Drupal Association
  • Junio 2015 - DrupalCon Los Angeles 2015 - Drupodcast
  • Organize and Manage Your Drupal Projects Using Dropfort with Mathew Winstone - Modules Unraveled Podcast
  • Real world change with PHP and community: "The sky's the limit." - Acquia Podcast
  • Talking Drupal #095 - Easy Does It

Events

D8 Accelerate critical issue sprint

July 2nd - 8th in London, UK.

News

DrupalCI: It's coming!

"DrupalCI is the next-generation version of our beloved testbot. The MVP ("minimum viable product") is coming soon (rolled out in parallel with the old testbot for awhile)."

Jobs

List Your Job on Drupal Jobs

Wanna get the word out about your great Drupal job? Get your job in front of hundreds of Drupal job seekers every day at Jobs.Drupal.Org.

Featured Jobs

Senior Web Developer

Pantheon San Francisco/CA/US

Sr. Drupal Developer

Professional Recruiting Services [on behalf of client] US

Junior Drupal Developer

Cancer Research UK London/GB


InternetDevels: Drupal vs Wordpress: functionality vs simplicity

Planet Drupal -

Drupal and WordPress are very popular content management systems for website development. They are both built on PHP and MySQL, they are both free, and they are almost the same age (Drupal was “born” in 2001 and WordPress in 2003). However, they have significant differences in ease of use, functionality, flexibility, and more. Accordingly, each CMS has a passionate army of its own fans.

Rivals in different categories

Read more

Mediacurrent: Video: Metatag 1.5

Planet Drupal -

With meta tags still important for search engine optimization and for improving how content looks when shared on social networks like Facebook and Twitter, it's worth learning how to use Drupal's Metatag module the right way. Let me show you how a site can benefit from using meta tags; then find out about the latest improvements, along with recommendations on how to get the module set up correctly for your site.
