Monkey Business

By Peng Xie

Let the monkeys out!


“Monkeys?” 

Of course we’re talking about monkey tests here… There are actually two types of monkeys — the smart monkeys and the dumb monkeys.

A smart monkey knows the application it is testing and follows a certain set of rules when doing its job. A dumb monkey, as its name suggests, knows pretty much nothing and behaves more randomly. A monkey in real life probably falls somewhere between our smart monkey and our dumb monkey.

There are advantages and disadvantages to both types of monkeys. A smart monkey, while more effective at finding bugs, requires more effort to “train”: the developer needs to encode the necessary knowledge and rules before the smart monkey can start testing. A dumb monkey is more likely to find random, out-of-the-box bugs, but thanks to its unpredictability, those bugs may be harder to reproduce. Some would even question the value of monkey tests, since it may take a long time before they find any bugs.
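To make the distinction concrete, here is a minimal sketch in Python of how the two monkeys might choose their next events; the action names, weights, and the idea of passing rules as a dictionary are all invented for this illustration:

```python
import random

# Events the test harness can fire at the app (hypothetical names).
ACTIONS = ["tap", "long_press", "swipe", "rotate", "background"]

def dumb_monkey(steps, seed=None):
    """Pick events uniformly at random -- no knowledge of the app."""
    rng = random.Random(seed)  # a fixed seed makes a crashing run replayable
    return [rng.choice(ACTIONS) for _ in range(steps)]

def smart_monkey(steps, rules, seed=None):
    """Pick only events the rules allow, weighted by how likely a user is to do them."""
    rng = random.Random(seed)
    allowed = [a for a in ACTIONS if a in rules]
    weights = [rules[a] for a in allowed]
    return rng.choices(allowed, weights=weights, k=steps)

# The smart monkey's "training": taps are common, rotation is rare.
rules = {"tap": 5, "swipe": 3, "rotate": 1}
print(dumb_monkey(5, seed=42))
print(smart_monkey(5, rules, seed=42))
```

Note the seed parameter: recording it is one practical way to tame the reproducibility problem described above, since the same seed replays the same event stream.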

“But why?” 

I think monkeys are important friends of developers. Here’s a story:

Xuanzang finished his app. He followed test-driven development and wrote unit tests for all components of his app, including “comprehensive” UI tests. (At least that was what he thought.) His app passed all tests with flying colors. He was happy and released it to the public. A day later, weird UI-related crash reports started to show up in his inbox. People had given his app negative reviews. And he was no longer happy.

“Why? The app passed all tests!” exclaimed Xuanzang.

His monkey friend came and touched a part of the UI that “a normal user would not think of touching.”

The app crashed.

The monkey friend relaunched the app and long pressed a button that “a normal user would just tap.”

The app crashed again.

You see, the monkey is not a “normal user.” As developers, it’s easy (and absolutely normal) to assume certain user behaviors when writing test cases. Because we designed the app to function within set boundaries, we may be blind to behaviors outside of the box. A wise man once told me that when it comes to test cases, if you think you are already crazy enough, real life will always out-crazy you. To save our sanity (and time), we need to ask our monkey friends for help.

“Where can we make friends with the monkeys then?” 

A lot of places actually. Not in the zoos though…

If you are an Android developer, your monkey friends are actually living right in your Android SDK! They run on your emulator or devices to produce random streams of user events.
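The tool behind them is the `monkey` command that ships with the Android SDK; a typical run from the command line (the package name below is a placeholder) looks something like this:

```
# Fire 500 pseudo-random user events (taps, swipes, system keys) at one app.
# -v adds verbose logging; -s fixes the seed so a crashing run can be replayed.
adb shell monkey -p com.example.myapp -s 12345 -v 500
```

Keeping the seed from a crashing run lets you replay the exact same event stream, which helps with the reproducibility problem mentioned earlier.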

For iOS developers, Apple has yet to introduce us to any monkey friends. But we do have SwiftMonkey, which lives on GitHub. It even comes with a framework called SwiftMonkeyPaws that lets you see, in real time, where your monkey friends have touched your app’s UI.

Those are only a few examples. If you search, you can always find a monkey friend with a very particular set of skills…

Ethical Dilemmas of Technology: Privacy

By Xinye Ji

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

-Edward Snowden

It’s difficult to overstate how much privacy we give up for access to services such as Facebook, Twitter, Instagram, Google, or even Amazon. With every click, whim, or action, we tell the services we interact with a little more about ourselves. As time goes on, a clearer profile of each person is created, and interesting conclusions can be drawn from these profiles.

The movie Yes Man satirizes the worst-case scenario when Carl, Jim Carrey’s character, is arrested on charges of potential terrorism due to taking flying lessons, studying Korean, approving a loan to a fertilizer company, meeting an Iranian woman, and buying plane tickets at the last minute. While the situation was comically exaggerated, the film offers an interesting social commentary on the state of our privacy. We run the risk of pushing ourselves further into a surveillance state.

Speculatively, one could argue that this kind of data aggregation comes from a place of goodwill. In fact, terrorism prevention is a frequent argument that the US government makes to justify the continued activities of the NSA. Aside from crime prevention, there are many other applications for aggregate data. For example, aggregated data now informs us about real-time traffic. Our aggregated profiles can warn us about upcoming car repairs. Are these conveniences worth giving away your information? What if Target could tell you were possibly pregnant based on your purchases? Or what if Facebook could tell that someone was about to commit suicide? Or what if Microsoft could tell you that you have pancreatic cancer a month before you knew it yourself? Are these advancements in technology worth the loss of our privacy?

This blog post might seem a bit eclectic and even self-contradictory. I don’t intend to draw any conclusions on this topic, as we could go on about it forever. However, I do believe this ethical dilemma is something that we, as a society, will have to decide on in the near future. For the time being, I hope that the average citizen considers and understands how to protect one’s liberties in the digital age.

Would I give HomePod a home?

By Peng Xie


Apple’s highly anticipated speaker was finally released after a few months of delay. I happen to be in the market for some new speakers. Would I give HomePod a home, then?

The good

When it comes to speaker shopping, sound quality and build quality are the two most important aspects I care about. Being an Apple product, HomePod obviously inherited the same beautiful, high-quality design from the family. And as a speaker, HomePod does not disappoint either. After some complicated measurements, a Reddit user on r/audiophile claimed that HomePod is “100% an Audiophile grade speaker.” The review was picked up by multiple publications as well as Apple’s Senior VP, Phil Schiller. Although the measurements were later deemed inconclusive after being examined by other Reddit users, HomePod did receive overwhelmingly positive comments on sound quality from reviewers. Personally, I’d like to see some scientific measurements and comparisons of HomePod against other high-end speakers. Even if I decide to buy one, I still want to give it a listen myself if possible, since sound quality can be subjective after all.

The bad

Now let’s put aside the Apple-colored glasses and look at the HomePod from a “different” perspective. First, anyone who is considering a HomePod should know you’re buying into Apple’s ecosystem if you’re not already in it. Though that may simplify the setup procedure, keep in mind that HomePod will only stream from your own iTunes library via AirPlay or from Apple Music, which is a paid subscription. No Spotify, Amazon Music, or Audible.

What’s more (worse?), HomePod doesn’t support Bluetooth either. So if you don’t have any device that supports AirPlay, HomePod is pretty much unusable. And a big NO-NO for me is that HomePod doesn’t have any wired audio input for more traditional devices like a turntable.

The least of my concerns about HomePod is the smart-speaker aspect. But to other potential buyers, this may very well be as important as sound quality is to me. If you have any experience with Amazon Echo’s Alexa or Google Home, you may find Siri on HomePod pretty basic. It can’t learn new skills, and some features won’t be available if the paired iPhone is not nearby. So, don’t get your hopes up.

And the ugly

Warning: HomePod ruins furniture!

This is literally “ugly.” Multiple sources have reported that HomePod will leave a white ring on certain wood surfaces. Apple has acknowledged the situation and suggests that the ring will fade or can be cleaned using the “manufacturer’s suggested oiling method.”

Okay, whatever you say Apple.

Conclusion

You may already be able to guess that I won’t be getting the HomePod. Well, at least not now. I have a first-generation Amazon Echo at home and recently bought an Echo Dot to use in a different room. Neither of them is particularly smart, and Alexa still has trouble differentiating my Philips Hue lights and scenes if they have similar names. For sound quality and usability, I think a traditional speaker-and-amp setup can still run rings around HomePod for the same amount of money. Apple has made some pretty impressive speakers and headphones before, so I have confidence in the future of HomePod. It’s just not for most people at the moment.

Kotlin: A Learning Process

By Xinye Ji

For those who aren’t in Android development circles, Kotlin has been on the rise since it first appeared in 2011. At Google I/O 2017, Kotlin was announced as officially supported, and adoption of the language has soared in the last year. I recently decided to pick up the language and hope to implement upcoming features in our products with Kotlin.

Obviously, switching to a new language always has its hurdles. Syntactically, things always take a while before you get into a groove, but that’s always a part of the expected learning curve.

One of the main benefits of Kotlin over Java is that Kotlin values more concise code. For developers who aren’t fond of Java’s famously verbose nature, I think this is a welcome change. There aren’t any massive paradigm shifts, but many smaller changes add up to a more streamlined development experience.

But aside from these mostly aesthetic changes, Kotlin takes many design cues from Joshua Bloch’s Effective Java. For me, this had some mixed results. Some of my lazier programming practices cropped up, resulting in me rethinking and revisiting my thought process when implementing some features in my sandbox app.

Thankfully, because Kotlin is developed by JetBrains, the same company behind the IDE that Android Studio is built on, a lot of these things are pointed out by the IDE itself. For example, my biggest lazy habit was not considering mutability. In Kotlin, one has to declare variables as mutable or immutable with var or val, respectively. If Android Studio (or rather, IntelliJ) detects that something you have declared is never changed after creation, it will flag that line and suggest you change it to immutable. If I had to encapsulate a lot of Kotlin’s language design decisions, I’d say that it almost forces you to make better architectural decisions by employing an opt-out mentality rather than an opt-in one.
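A minimal sketch of the val/var distinction (the names here are my own invention, not from any particular codebase):

```kotlin
fun main() {
    var counter = 0          // 'var': mutable, reassignment is allowed
    counter += 1

    val appName = "Sandbox"  // 'val': immutable, reassignment is a compile error
    // appName = "Other"     // uncommenting this line would not compile

    // Had 'counter' never been reassigned, IntelliJ / Android Studio
    // would flag its declaration and suggest changing 'var' to 'val'.
    println("$appName taps: $counter")
}
```

Defaulting to val and only reaching for var when mutation is genuinely needed is the opt-out mentality in practice.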

Overall, I think this makes the learning curve more difficult, as it can produce unexpected behaviors if you haven’t read all the documentation surrounding certain features in Kotlin. But in the long run, I think this will lead to higher levels of productivity.

On a more personal note, I don’t have any particular preference for Java or Kotlin yet. However, I’ve been working mostly with Java for the past few years, and there are many things I’m still learning about Kotlin as I play around with it more, so one should consider that my familiarity with the two languages is definitely not equal. In general, I’d say that (at the very least) trying Kotlin is a worthwhile endeavor for any native Android developer.


Apple Special Event 2017: A New Hope?

By Peng Xie


(Originally written on Sep 19, 2017)

Last week, the Cupertino company opened the doors of its new campus to guests with a special event that began with a touching and inspiring tribute to Steve Jobs in the theater named after him. During the two-hour keynote, CEO Tim Cook, along with other familiar faces, presented the world with a fleet of next-gen Apple devices that immediately captured headlines on all major tech websites and publications, despite the leaks and speculation that preceded the event. So, what does this special event mean for Apple and the tech industry? And more importantly, what does it mean for us, the consumers?

In my opinion, this special event is the most important event for Apple in years. When Steve Jobs introduced the original iPhone and the App Store, Apple virtually changed the whole smartphone industry and turned itself from a computer and music player manufacturer into a leader in the mobile device, software, and retail industries. Since then, even though there were still good yearly updates and new(ish) product releases from Apple, I felt a trend of slowing down… until this year’s special event.

(I’ll be discussing the new products from the special event in the order of how I remembered them and how important I think they are, so pardon me if it is different from the order of how they were introduced.)

Apple Watch Series 3

Starting with Apple Watch Series 3: while mostly unchanged in terms of design, more powerful internals and the addition of LTE are definitely welcome improvements. This is nothing new, since there are already Android Wear devices with LTE capability. And given that the new Apple Watch’s LTE will only work in the country where it is purchased, I think Apple will continue to try to win over customers with the refinement of the Apple Watch rather than with features. The thing that interests me most in the LTE Apple Watch is actually the internal SIM card. Apple is an active player in pushing new SIM card standards and has used similar technology in previous iPad models. I’d like to see Apple put this technology in future devices and make it another industry standard, just like what Apple did with micro and nano SIM cards in previous iPhone models. Telecom companies would welcome this feature, since it can reduce the cost of making SIM cards while giving them better control over devices activated on their networks. For some consumers, however, an internal SIM card may not be preferred: using a local SIM card while traveling will no longer be as easy as swapping out SIM cards. To realize the full benefit of an internal SIM card, phone manufacturers and service providers should really come up with ways to streamline the activation experience for customers.

Apple TV 4K

Next up is Apple TV 4K. While the device itself is more of a catch-up with other streaming devices to some people (personally, I love tvOS and I think Apple TV has huge potential), the really exciting news is that Apple is making quality 4K content more accessible than ever. Years ago, a format war between Blu-ray and HD DVD, backed by multiple big companies and studios like Sony and Warner Bros., made HD content widely available to the general public. But today, even with the popularity of 4K TVs and YouTube videos, we rarely see 4K movies sold online or in stores. With Apple upgrading the movies in the iTunes Store to 4K without raising the price, I believe other retailers and content providers will soon step up their game in 4K, which will in turn benefit consumers greatly with not only increased availability but (hopefully) also reduced cost of content and devices.

iPhone 8 and iPhone X

Last but not least are the iPhones. The same “magic” formula used in the iPhone 6 is still being used in the iPhone 8/8 Plus, while Apple “revolutionized” the smartphone design with the iPhone X. Funny enough, the stunning glass-back design on both the iPhone 8 and the 10-year-anniversary iPhone X is actually a throwback to the iPhone 4 rather than the original iPhone, but I think this is mostly due to the newly added wireless charging feature. The iPhones use the existing Qi standard, so I don’t think there’s much to discuss there. What matters most to me is the heart of the new iPhones, the A11 Bionic chip. In an interview after the event, Apple’s marketing chief Phil Schiller said Apple started development of the A11 chip three years ago, when the A8 chip shipped with the iPhone 6. The focus on graphics and neural network processing in the design of the A11 Bionic chip gives us a glimpse into Apple’s ambitions. Along with the introduction of ARKit and Core ML in iOS 11, Apple is obviously venturing into a future of augmented reality and machine learning.

This is more or less a trend in the industry, but not all companies have the expertise across design, manufacturing, and software development that Apple does. The Verge reported that the “notch” on the iPhone X is as complex as a Microsoft Kinect. Packing all those components into an area that small is simply amazing. Paired with the powerful A11 chip, it will surely give the iPhone X the ability to raise the bar for facial recognition technology. In addition, Face ID, AR, and other processes will be a great help in training Apple’s machine learning algorithms. A more intelligent device can significantly improve the user experience with the help of the Core ML framework.

Aside from all the praise, there are concerns over the brand-new iPhone X. The “notch” design is not loved by all. The security of Face ID has yet to be tested by the public. And, unsurprisingly, there are privacy concerns regarding the Face ID feature. To me, those are all valid concerns and healthy discussions. Not only Apple, but all manufacturers and consumers should know what they’re dealing with and getting into. I’d like to see the big players like Apple, Google, and Samsung work together to come up with an industry standard to ensure devices meet minimum security and privacy requirements.

Still a leader?

I believe the answer is yes. With 50% sales growth, Apple replaced Rolex as the biggest watchmaker in the world. And the tech giant is now championing a future of 4K content. Even though the iPhone 8/8 Plus remains mostly unchanged and the iPhone X’s minimal-bezel design is not a first in the industry, Apple will still be able to influence the design language of future smartphones. When we look back from a future of augmented reality and devices with outstanding learning capabilities, I think we will all agree that this year’s Apple Special Event was as significant as the one where Steve Jobs introduced us to a legendary device called iPhone.

Here’s to the next 10 years!

WWDC 17: Why am I excited as a developer?

By Peng Xie


It’s been a week since WWDC, and I finally found time to write this blog post to express my excitement as an iOS developer. I won’t be talking about the keynote, since it’s pretty much the same every year. Instead, I’ll be focusing on the real deal: the WWDC 2017 Platforms State of the Union.

For non-iOS-developer readers: Platforms State of the Union is a WWDC session that, as its name suggests, gives attendees a more technical overview of what’s coming to Apple’s platforms. Just like in previous years, Apple made some really big announcements to its developer community. Here are some of my personal favorites.

Xcode 9 Source Editor

As an iOS developer, I use Xcode every day, which can sometimes be a pain. Xcode’s performance and features are not that strong compared to some of its competitors. This year, Apple introduced one of the most welcome changes to Xcode in my opinion – they have rewritten the whole source editor from the ground up in Swift! The result? 3x faster file opening, 60fps scrolling, and up to 50x faster jump-to-line. On top of that, they also implemented an integrated Markdown editor, improved presentation of coding issues, and tokenized editing. What’s even better? A brand-new refactoring engine and workflow powered by an open-source transformation engine. IntelliJ users may not be that impressed with these improvements, but to me, the all-new source editor will be a huge boost in productivity. I can’t wait for it to come out of beta… (Rule of thumb: don’t use beta Apple software for production development work.)

Swift 4

Not surprisingly, Swift 4 arrives with Xcode 9. Apple has vastly improved one of the most widely used types in Swift, String. In Swift 4, String is now a range-replaceable, bidirectional collection, meaning it behaves like, and can be used as, a collection of characters without any sort of conversion. Thanks to the underlying improvements, String processing is now 2.5x–3.5x faster, depending on the language of the text. Another welcome piece of news is the introduction of the Codable type. Conformance is synthesized by the compiler, giving you 100% type-safe JSON encoding and decoding with only one line of code. Apple also made it easier to adopt Swift 4 in Xcode 9: the compiler now supports both Swift 3.2 and 4.0 and allows developers to mix and match 3.2 and 4.0 targets. All these improvements make Xcode 40% faster at building large mixed Swift/Objective-C projects. Moreover, building projects that use multiple Whole Module Optimization targets is now 2x faster!
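As a sketch of that one-line decode (the `User` struct and the JSON string are my own example, not from the session):

```swift
import Foundation

// Conforming to Codable is all that's needed; the compiler
// synthesizes the encoding and decoding logic.
struct User: Codable {
    let name: String
    let age: Int
}

let json = "{\"name\": \"Xuanzang\", \"age\": 30}".data(using: .utf8)!

// The advertised one line of type-safe JSON decoding:
let user = try JSONDecoder().decode(User.self, from: json)
print(user.name)
```

A typo in a key or a type mismatch surfaces as a thrown DecodingError instead of a silent nil, which is the type-safety being advertised.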

iOS 11

One of the biggest announcements at WWDC 17 is iOS 11. For users, iOS 11 blurs the line between a desktop and a mobile OS, which will finally make the iPad Pro a viable productivity tool. For developers, this means new APIs to play with. Starting with the new drag-and-drop feature, Apple did a phenomenal job making it easy to integrate into apps. It’s automatic for text and web content, and it has delegate protocols for customization, similar to other iOS APIs. With its cross-process, system-wide multi-touch support and built-in data security, I’m sure developers will start offering this new feature in their apps as soon as iOS 11 becomes available.

Good news for everyone

Along with Xcode 9, Swift 4, and iOS 11, Apple also introduced Core ML for machine learning, the Metal 2 graphics engine, and ARKit for augmented reality. These are only a few of the things that caught my eye. I am really excited to learn more about Core ML and hopefully can put it to use in one of our apps someday. I truly think Apple has given us developers really good tools and platforms to provide users with the best features and experiences. This is good news for developers as well as users. A better Apple will surely push its competitors to step up their game, which is something I really like to see. Whether you’re a developer or an iOS/macOS user, you should be excited too. As consumers, we will always benefit from the competition.

Get to know “Markdown”

By Peng Xie

What is Markdown?

Markdown is a lightweight markup language that is natural to write and easy to convert to rich text and other formats such as HTML or PDF. Because of its simplicity and portability, it has become the go-to option for developers to document their code and write README files. Pick a random repository on GitHub, and you’re likely to see at least one file in it written in Markdown. Beyond developer communities, Markdown is also supported in a variety of other places such as blogs and forums. Even some instant messaging apps now have Markdown-inspired formatting features.

Markdown in GitHub

As one of the most popular places where Markdown is used extensively, GitHub actually has its own version of the Markdown syntax, called GitHub Flavored Markdown (GFM for short). Being a Git hosting service, GitHub uses GFM to provide users with additional features, such as the ability to reference issues, pull requests, and the SHA-1 hashes of commits.
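For example, in an issue or pull request description, GFM turns references like the ones below into links automatically (the issue numbers and commit hash here are made up), and it adds extras such as task lists and strikethrough:

```
Closes #42, as discussed in pull request #17.
Introduced in commit a1b2c3d.

- [x] Task lists are a GFM extension
- [ ] So is ~~strikethrough~~ text
```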

Markdown in WordPress

WordPress supports Markdown as well, but you have to enable it first in your blog’s settings. In Settings, under the Configure section of the side menu, you can turn on Markdown support for composing in the Writing tab and for comments in the Discussion tab. After saving the settings, you can start to write your new blog posts in WordPress using Markdown. As a matter of fact, this whole blog post is written in Markdown!

Time to play?

Now that you know more about Markdown, let’s see some examples!

Code

You can either put your code inline like this or add a code block in your file like what I’m going to show you below.

```
This is a code block!
```

The syntax for inline code is to wrap your code inside a pair of ` characters.
To use a code block, simply put ``` on the lines above and below your code block.

Links

[link to Google](https://google.com)
will be rendered by a Markdown viewer as
link to Google

Lists

* Unordered list item 1
* Unordered list item 2

1. Ordered list item 1
2. Ordered list item 2

The code block above will be rendered as:
* Unordered list item 1
* Unordered list item 2

  1. Ordered list item 1
  2. Ordered list item 2

Those are just a few simple examples of Markdown. There are many other ways and styles to write in Markdown. If you are interested, you can check out GitHub’s guide on Markdown and the Markdown support page for WordPress.

Also, check out this awesome post on Ray Wenderlich for some recommendations on Markdown editors for macOS.

Bonus

Hey, thanks for reading this blog post! Here’s a bonus section for you! Did you know that you can make presentation slides using Markdown as well? I found an interesting presentation writer on GitHub called Marp. Just separate your slides with --- between empty lines, and you can literally write a whole presentation using Markdown in a single file!
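A two-slide deck is as simple as this (the content is just a made-up example):

```
# Slide 1

Hello from Markdown!

---

# Slide 2

The --- above starts a new slide.
```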

The Future of Work and Automation


By Danny Kulas

(Before we go ANY further, I would like to recommend the video “Humans Need Not Apply” so as to provide a better understanding of what I’ll be discussing in this article)

Robots are coming to take over the world, steal our jobs, and invade every nook and cranny of our lives.  Now, as dark and ominous as that may seem, it would be a lie to say this isn’t already happening.  In fact, robots are here, they are stealing jobs, and they have (just about) edged their way into every facet of living and being and working (and have been doing so for quite some time).  Now, when I refer to robots, I’m more specifically referring to automation, robots, artificial intelligence, and machines in general.  So, what exactly is automation?  Is it even an issue, and should we really be concerned?  Will there be any jobs left?!  Let’s take a look and find out.

What’s the Problem?

The problem is that jobs people once held are increasingly being replaced by robots and automated processes.  This isn’t all that shocking, though, as “machines have been taking our jobs for centuries (Rachel Nuwer)” (think of the steam engine, or the advent of the automobile replacing the necessity of the horse).  This process of technological innovation replacing the human workforce is not new and will continue to accelerate as time progresses, and therein lies the crux of the problem:  technological innovation is moving forward at such lightning-quick speed that many segments of the workforce can’t keep up.

We’ve already seen instances of workforce automation, some of which have been around for decades, hiding in plain sight.  A few simple examples (that you likely interact with on a weekly, if not daily, basis) that come to mind are the automated teller machine (ATM) replacing bank tellers and self-checkout lanes replacing grocery-store clerks and cashiers.  Also, “technologies such as payroll-processing and inventory-control software, factory automation, computer-controlled machining centers, and scheduling tools have replaced workers on the shop floor and in clerical tasks and rote information processing (Andrew McAfee).”  If something can be automated or done by robots, you can bet that it more than likely will be.  All of those fast-food workers protesting for a $15/hour minimum wage might find themselves out of work permanently thanks to the burger-flipping robots of the future.

Who Will Be Affected?

Everyone will be affected and there will be no jobs left for anyone.

OK, it won’t be that draconian, but just about everyone could be affected.  If you watched the video at the top of this article, then you’re well aware that machines can do just about any job a human can, and so long as the robot performs as well as or better than its fleshy counterpart, we’re all in for a rude awakening.  That should make anyone concerned.

Now granted, machines won’t come in overnight and gobble up every available job; it takes years (and even decades) to bring much of this technology to market and for it to be adopted and implemented in the workplace.  Over the years, humans have been replaced by machines for the repetitive and mundane (low-skilled) tasks that once belonged to the human worker.  Take the assembly line, for example.  Where hundreds of people could once find steady employment, now only the help of several automated robots is needed, and at a severely discounted price.  Machines taking jobs once held by humans is only a small piece of the problem, though; an even larger issue we’ll be facing is what to do with all of the people fresh out of work without the proper training and advanced skills to move into a new job.

“Many people fear a jobless future – and their anxiety is not unwarranted:  Gartner, an information technology research and advisory firm, predicts that one-third of jobs will be replaced by software, robots, and smart machines by 2025 (Kathleen Elkins).”  During the Great Depression, the unemployment rate was 25%; let that sink in as we push ahead.  While my previous examples highlight rather low-skilled jobs being replaced by robots, fear not, white-collar community, because the machines are coming for your jobs as well.  “Artificial intelligence and robots are not just challenging blue-collar jobs; they are starting to take over white-collar professions as well (Kathleen Elkins).”

Will any jobs be safe?  The short answer: yes.  Computers, generally, aren’t that great at interpreting emotions, displaying empathy, purveying comedic relief, or creating new and original pieces of art or music, among other things, but “that doesn’t mean nobody is trying.  Researchers in Arizona are trying to teach robots to appreciate poetry, and a Parisian robot is already able to emulate popular composers and create new music in their sonic likenesses (Jack Smith IV).”  And while inventors and innovators will try to introduce much of this technology to the marketplace, its usability hinges on whether a person would rather interact with a machine or a human, depending on the task at hand.  Even if a robot is better at a certain task than a human, there is a high probability (depending on the task in question) that many people will continue to choose to interact with actual people (think: empathy) and not their metal-and-chrome counterparts.

What Can We Do About It?

With all of the jobs automation has removed from the workplace, many people will ask “What can I do to stop this?”  NEWSFLASH: You Cannot Stop The Robots.  Seriously, there is no stopping the “Second Machine Age (Kathleen Elkins)” and all you can hope to do is become part of the solution (or have saved and invested enough to begin retirement early).

How do you do that?  Go back to school, learn a new skill, or apply (if available) for job training.  Going back to school is a daunting task, especially for someone being let go at an older age from a job they’ve known for the last x-amount of years.  You might have a home or family (or both) to take care of and can’t afford to substitute work hours for study hours.  This will be the case for a large majority of people who find themselves out of work.  I know “go back to school” sounds like a simple answer to such a large problem, but frankly, “we haven’t experienced anything quite like this before.  Even though machines did more and more work and the population grew rapidly for almost 200 years, the value of human labor actually rose.  You could see this in the steady increase in the average worker’s wages.  That fueled the notion that technology helps everyone.  However, that kind of success is not automatic or inevitable.  It depends on the nature of the technology, and the way individuals, organizations, and policies adapt.  We’re facing a huge challenge (Erik Brynjolfsson).”  There’s no doubt that the challenge we’re facing is massive, and one I believe we’re ill-prepared for.

One way we can ready ourselves is through education.  This is not a shot at teachers, but rather at school boards and the education system in general.  It’s long past time we upgraded our curriculum to better reflect the growing needs of the workplace.  We need to focus more (read: NOT exclusively) on STEM courses (Science, Technology, Engineering, Mathematics) and implement them across the board, for all ages, starting at an early age.  Even as a person who loves history and really enjoyed my social-science classes throughout my schooling, I can see that we need to realign our education goals and what our children (the leaders and workers of tomorrow) will be learning.  I’m not saying we should throw everything we know about teaching out the window and focus solely on STEM areas of study, but I am saying we should be doing a better job of providing these classes to all students and, ultimately, making many of them a requirement.  But as I mentioned in previous paragraphs, there are professions that humans excel at that robots and machines haven’t been able to match.  With that in mind, “primary and secondary education systems should be teaching relevant and valuable skills, which means things computers are not good at.  These include creativity, interpersonal skills and problem solving (Andrew McAfee).”  Again, I cannot stress enough that I don’t believe we should focus only on STEM areas of study, but that we should be integrating them into our curriculum at a faster pace.  Having students memorize state capitals or the elements of the periodic table may not be the best use of classroom time.  This is just one example of how we could better prepare ourselves and future generations.

More Jobs?  Fewer Jobs?

“Digital technologies will bring the world into an era of more wealth and abundance and less drudgery and toil.  But there’s no guarantee that everyone will share in the bounty, and that leaves many people justifiably apprehensive.  The outcome – shared prosperity or increasing inequality – will be determined not by technologies but by the choices we make as individuals, organizations, and societies.  If we fumble the future…shame on us (Erik Brynjolfsson).”  Indeed, Erik is correct that much of what is yet to come hinges on our ability to make knowledgeable and informed decisions, individually and as a group, but whether this second machine age will create or destroy more jobs remains to be seen.  If people loiter on the sidelines and don’t get the re-education or job training they will need to transition into new, high-skilled roles, then yes, it is highly possible that robots will destroy more jobs than they create, but through no fault of their own.  Before the web came along (and even soon thereafter), no one could have imagined that there would be job titles such as “Social Media Manager,” “Data Mining Specialist,” or “Application Developer.”  I believe it is safe to say that this second machine age will bring about an abundance of jobs, but the caveat is that these jobs will require higher-skilled workers.

Conclusion

The Second Machine Age is quickly approaching.  Many people will be under-prepared to face the new challenges of the workplace and, because of that, will be left with a decision to make.  For those who are prepared, there is peace in knowing that you have placed yourself in a position to succeed and become part of the solution.  There are monumental strides still to be made in addressing this “coming of age,” but it is becoming increasingly difficult to plan for a future that so many know so little about.  Take the time to read about emerging technologies, or teach yourself how to code (there are so many free resources online it’s almost overwhelming) or any other new skill that will be highly sought after.  It will only benefit you as a person and as a professional, and you may even find something you never thought you’d enjoy.

Web vs Native – Will it ever end?

By Danny Kulas

The web is dead.  Native is the only way forward.  These two statements (or any combination of them) could not be further from the truth, and yet they have created a crusade in which people are marching for the wrong reasons.  Designers and developers are becoming increasingly vocal on whether web or native is the answer for developing an application, and framing the issue like that, i.e. giving an ultimatum, is detrimental to the continued development of both platforms.  There is no silver bullet for developing a new product, and outright declaring one solution superior to the other will not only hurt your growth as a programmer but will have adverse effects on your end product.  Knowledge is power, and understanding and applying that knowledge is critical when beginning development of a new application.  This is an issue for a number of reasons, some of which include:

1. Not understanding your business goals and needs

2. Not understanding how web and native solutions can (and often do) work together

3. Some people are just closed-minded

4. And the list goes on

“Let’s embrace the advantages of both native and web to create more holistic mobile experiences (Brad Frost).”  Brad makes a bold statement, one that won’t soon become the widespread norm (although, it would be nice if it caught on soon!), but it does give us a goal to work towards.  In the following paragraphs I hope to show you why both mediums are desirable and give you the knowledge to know which solution is the proper one for your situation.

Why Web?

When developing a new product, designers and developers will choose web as their main platform for a number of reasons, chief amongst them being low friction to entry and immense reach.

“The average mobile user visits far more sites per month than individual apps – I’d postulate because installing an app has a somewhat high cost; you have to knowingly wait for the app to install, possibly agree to a EULA-like list of permissions that you may not understand (or more likely, haven’t bothered to read), and then you’ll be reminded of this relationship forever after by that icon in your home screen (if not notifications popping up) (Chris Wilson).”  On top of this, many app stores charge a fee to deploy your app within their ecosystem, which can immediately price people out of that app store, or out of creating a native application altogether.  I can vouch for Chris’ sentiment when it comes to downloading certain native applications.  In fact, there have been times when I’ve downloaded an app from the Apple App Store and, after attempts to sign up, set up, and accept any terms and conditions, I was already desensitized to the idea of using the app.  Besides, I know I can find the same information a native app offers on the web as well, and often with fewer barriers in the way.

This is what Chris is referring to when he mentions “low friction to entry” in his article.  In a day and age when everything is instant and people want information or content yesterday, waiting around for benign processes to complete is a disadvantage that can have devastating results.  Being able to reach users all over the world is a massive advantage the web holds over native applications.  For example, perhaps there is a game in the Apple App Store that you dearly want to download and enjoy for hours on end but can’t, because you own a smartphone running a different operating system than the one the game was built for.  If it’s not clear to you, that means you won’t be playing Smash The Hamster anytime soon on your smartphone, unless of course the team behind the game decides to expand their code base.  Native applications, by their nature, raise the barrier to entry significantly, providing an experience only to those who own a smartphone running the operating system the application was built for.  This is not the case with the web.  Whether you’re using Firefox, Chrome, Opera, or (God forbid) Internet Explorer, you will still be able to access the same website with the same features, anytime, anywhere, on any machine, in any browser.  That is what people in the industry refer to as ‘reach’ and ‘accessibility’.

Another reason the web is preferable to native is its low amount of “junk.”  When you download an app from any app store, it remains on your phone.  Whether you use it once, twice, or never, it will still be there, taking up space and running updates in the background, using up your data, which costs the end user money.  Ever get one of those text messages from your cell-phone service provider saying something along the lines of “You have exceeded your monthly data limit and will be charged blah blah blah”?  You can thank your apps (the ones you’re using and even the ones you’re not) for that.  Websites, on the other hand, are ephemeral: a user doesn’t need to download anything to get the content they’re looking for.  They can go to your website (often as an anonymous visitor, as with most news websites), absorb content, and then leave, without so much as a hiccup getting in their way.  There is no “junk” left on their phones or in their browser, and the user (likely) never had to provide any personal data that would reveal their identity.  Get in, get out, get on with it.  And last, but certainly not least, you only have to maintain one code base to reach all devices and platforms, whereas in native app development you may have to develop for iOS, Android, BlackBerry, Windows, etc.  That’s four different code bases a company would need to devote time and resources to in order to deliver its product to the end user.

Why Native?

Many teams choose the native route for several reasons, chief amongst them access to the device’s hardware and features, as well as the sheer number of people who own a smartphone.  “Today there are about 2.6 billion smartphone subscriptions globally, and while growth has been leveling off in developed markets like the U.S. and Europe, it’s not stalling altogether by a long shot (Ingrid Lunden).”  With numbers like that (and still growing), it is abundantly clear why developers would choose to go native.

“Native apps, since they are built specifically for that device, get access to all of the features of that device.  iOS and Android devices all have accelerometers, GPS location, magnetometer (Compass), cameras and a whole host of other features.  With native applications you get to use these features and it’s usually a pretty simple addition to make to the app.  With web apps you struggle to access many of these features and implementing them can be a huge pain (Dave Leverton).”  While the web has certainly made great progress, it still lags behind native in terms of feature interaction.  Sometimes designers like to mirror native behavior in their web applications, and that can be a massive challenge because the tools you need aren’t “under the hood” the way they are for native applications.
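In practice, web developers work around this gap with feature detection: check whether the browser actually exposes a device API before relying on it, and fall back gracefully when it doesn’t.  A minimal sketch of the idea (the helper name and fallback strategy here are illustrative, not from any of the articles quoted above):

```javascript
// Feature-detection sketch: decide how to obtain the user's location
// based on what the browser exposes. `chooseLocationStrategy` is a
// hypothetical helper, not a standard API.
function chooseLocationStrategy(nav) {
  // Native apps can assume GPS access; web apps must check first.
  if (nav && nav.geolocation &&
      typeof nav.geolocation.getCurrentPosition === "function") {
    return "geolocation-api"; // browser exposes the Geolocation API
  }
  return "ip-lookup"; // fall back to a coarse, server-side IP lookup
}

// In a real page you would call chooseLocationStrategy(navigator);
// here we simulate both cases with plain objects.
const withGps = { geolocation: { getCurrentPosition: function () {} } };
console.log(chooseLocationStrategy(withGps)); // "geolocation-api"
console.log(chooseLocationStrategy({}));      // "ip-lookup"
```

The same pattern applies to the accelerometer, camera, and other hardware Dave mentions: detect, use if present, degrade if not.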

Another huge advantage of designing a native application is that developers “are given access to all the tools of the trade when it comes to building the UI for the apps.  You are given all of the standard user interface items, such as the iOS standard navigation bar, or the Android’s action bar.  Having access to these user interface items can make the app look and feel like it was made by Apple or Google themselves.  On a web app these items have to be mimicked to look and feel the same, meaning that a lot of the time ‘things just don’t feel the same’ (Dave Leverton).”  This is quite true, too.  Many times, clients will ask for a website that looks and feels just like a native iOS (or Android) app, and this is often hard to achieve for the reasons mentioned above.

“When going the native route, you can leverage the marketing power of the app store.  It doesn’t really matter which store.  The fact is, they all try to help market your app with free exposure or promos you wouldn’t get otherwise (Rachel Appel).”  Just by being placed in an app store, you are given free advertising.  You don’t need to do anything (aside from having an application that will actually benefit people, but that’s an entirely different topic for a different day) for people to see or find you in the app store.  Considering Firefox, Chrome, Opera, etc. don’t do anything on your behalf to market your website, I’d say this comes as a big upside.  Now, I’m not saying that the free marketing and advertising provided by the app stores is the end-all-be-all, not at all, but it is a nice complement that browsers simply don’t provide.  Marketing a product can be a monstrous task, and any added help (no matter the size) is beneficial to your end product.

Moving Forward

As the industry continues to move forward, big strides in innovation and implementation standards will be made that better synthesize these two platforms.  To continue the argument of native vs web would be a disservice to the community as a whole; instead, we should be holding court on how these two mediums can work together and independently to provide a richer experience for the end user.

“Properly designed web apps in the modern world can be incredibly responsive, and with re-engagement features like push notifications being added daily, the web is now a viable platform for engaging user experiences (Chris Wilson).”  Things that go fast are no longer relegated to planes and automobiles (or native apps, for that matter).  Website innovation is evolving at a remarkable rate, browser feature adoption is plowing ahead at blazing speed, and all the while end users are benefiting from a web ecosystem that has their concerns in mind.

At the same time, “there will always be reasons to build native applications.  It’s quicker to innovate platform APIs when you don’t have to go through standardization and browser implementation (Chris Wilson).”  As a web developer, I can certainly agree with Chris’ concerns regarding “red tape” and the world wide web.  While it is good to know that there are groups of people helping to push the web forward in a progressive manner, this can oftentimes add months (or years) to the release of any new features.

Conclusion

When you go to the golf course, you wouldn’t use only your driver for the duration of the round while neglecting other clubs that might (and probably will) do a better job.  In much the same way, the web will not solve all of your problems, and neither will native.  Understanding your business goals and needs is paramount to understanding which path to take when starting development of a new product.  Each platform has its own strengths and weaknesses, all of which should be carefully considered.  “At the end of the day, it’s about humans creating and sharing content, so don’t make the mistake of thinking native apps and the web are somehow opposed to one another.  Whether it’s your tap or your hose, the water in your house comes from the same place (Charles Lim).”

How to learn iOS Development (Part 3)

By Peng Xie

[You can read previous parts of this series here: Part 1  Part 2]

It’s been a long time since the second part of this series was posted. Hopefully my experience with note-taking was helpful. In this final part of the series, let’s talk about how a developer can evolve beyond “just being a developer.”

Sky’s the limit

For a profession like software development, continuous learning is common sense. I’m in no position to tell someone what he or she should learn, but I think there really shouldn’t be any limit on what a developer can learn. Making an app from scratch involves effort in many areas, such as design, project management and so on. For instance, basic image-editing skills are easy to pick up and will make things much easier when you want to make small changes to an image you use in your app. And knowledge of project management can help you better plan and estimate your development work. In my opinion, this knowledge and these skills can be extremely useful for any developer, whether working independently or in a team. After all, you don’t always have a graphic designer or a project manager on your team.

Always something new

The iOS (or Cocoa in general) developer community has been active for a long time, and there are tons of awesome projects that satisfy all kinds of needs for developers. For example, the ReactiveCocoa framework provides additional APIs for functional reactive programming in Objective-C. And tools like Reveal make tasks such as debugging user interfaces much easier and more efficient. The geeky side of me always likes to spend free time discovering and playing with new frameworks and tools. It’s an interesting way to learn new coding styles and enhance your own apps with those open-source frameworks. I usually visit Ray Wenderlich’s tutorial website, NSHipster and objc.io to learn about new frameworks and tools. Websites like CocoaControls are also good places to discover UI-related frameworks.
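If “functional reactive programming” sounds abstract, the core idea is small: values arrive over time as a stream (a “signal”), and you derive new streams by composing transformations instead of wiring up callbacks by hand. ReactiveCocoa itself is an Objective-C framework; the toy signal below is only a language-agnostic sketch of the concept (written in JavaScript for brevity), not ReactiveCocoa’s actual API:

```javascript
// Toy "signal": subscribers are notified of each value sent, and
// map() builds a derived signal -- the core composition step in FRP.
function makeSignal() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    send(value) { subscribers.forEach(fn => fn(value)); },
    map(transform) {
      const derived = makeSignal();
      // Forward each incoming value through the transform.
      this.subscribe(v => derived.send(transform(v)));
      return derived;
    },
  };
}

// Usage: react to a stream of text-field values by deriving lengths.
const text = makeSignal();
const lengths = text.map(s => s.length);
const seen = [];
lengths.subscribe(n => seen.push(n));
text.send("hi");
text.send("hello");
// seen is now [2, 5]
```

In a real ReactiveCocoa app, the signal would come from something like a text field’s changes, and the derived signal might drive a button’s enabled state, with the framework managing subscriptions for you.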

One for all, all for one

We benefit a lot from other developers and all their fantastic work on different frameworks and tools. As members of the developer community, we should give back and help other developers whenever we can. Contributing to open-source projects and answering questions on websites like Stack Overflow are good ways to learn from other people. We can see the kinds of issues others encounter and discuss different approaches to fixing them. Moreover, a good profile on GitHub or Stack Overflow also makes you stand out when you’re trying to find a new job.

Where to go from here?

It’s actually never possible to be “just a developer.” Sometimes you have to be a graphic designer or a project manager. And if you want to, you can also be an adventurer, a contributor or a tutor. Let us know in the comments section what you think about this series and what other topics you want to see in the future.

Keep learning and happy coding!