Some time in 2018 I realised we’d reached the point in the Android ecosystem that so many platforms reach as they approach the 10-year mark: there was more stuff coming out than it was possible to keep up with, or even try out.
So I thought, let’s draw up a high level map of most things Android developers have come into contact with, in general, since the start. Links to large-sized files are at the end…
Android Ecosystem: 2008-2018
Roughly speaking as you move out from the centre you are moving forward in time, although to group some items by theme I bend the rules. The lines represent relationships, though not always direct, and the dotted boxes are things that may no longer be in active use. There are also some 3rd-party honourable mentions in there.
I was prompted to finally make good on this diagram when I listened to Donn and Kaushik talking about Imposter Syndrome on Fragmented. Take a look at the image, this isn’t even the complete picture. At the same time many Android developers are doing other stuff; iOS, server-side, web, Flutter. No wonder it’s hard to keep up.
The same thing happened for me around 2008 with Flash. I started with Flash in ’99; there was timeline animation, a scattering of “scripting”, all highly creative. Over the next 10 years it evolved: XML layouts, 2-way data binding, ECMAScript 4th Edition (what eventually became JavaScript “Harmony”). It found a home in video, games and the Enterprise; server-side generators came out costing $15,000 per CPU and that was just the start. “RIAs” (Rich Internet Applications, aka thick clients) were light years ahead of the rest of the web. In the UK you could command a top-tier day rate working for banks as a freelancer, building internal tools that managed their data and generated reports. Then, as you know, it stopped.
Android is not going to live forever, but things have certainly moved at a pace that keeps them interesting. From phones to TVs, cars, smart speakers and more, the surfaces available have exploded, and so have the tools. We’re gonna need a larger sheet of paper.
A huge thank you to the people who make learning this stuff possible. The bloggers, the developer advocates, conference speakers, podcasters and documentation writers. 🙌
Download the source and exported PDF/PNG files from the GitHub repo.
Value Objects in Java with AutoValue and Lombok
In this post I want to discuss the subject of Value Objects, their purpose and some ways of easily implementing them in Java, specifically, although not exclusively, within the context of Android development.
Continue reading “Value Objects in Java with AutoValue and Lombok”
Messenger-based services, bots, agents, AI. It looks like app fatigue has led us to look to these for the next green field, something new for VCs to plough their money into, something that feels different.
From time to time technology comes full circle, and here we are again using something like IRC, in the UX slam dunk that is Slack, and setting loose upon it an army of bots, just as we did with IRC. Of course both of these things are significantly evolved from their forebears: the semi-public messaging platform (albeit now less suited to massive audiences), but also the bots, which were once relegated to simple tasks like running file shares or hosting quizzes for a handful of geeks, and are now powered by significant “AI” resource and connected to millions of people and myriad services, from Uber to Dominos.
AI, in the sci-fi movie sense, feels like it’s been “a decade away” for as long as I can remember. In reality IA (as the case may actually be) is already in use and has been with us for some time, just in a very limited and low-profile capacity, with the exception perhaps of IBM’s Watson kicking butt on Jeopardy. What we are now seeing is that potential being unleashed in consumer space, and the results are going to change HCI yet again.
Who are the trailblazers? IBM’s Watson we’ve mentioned, Facebook Messenger’s “M”, Amazon’s Alexa, the agent that lives in the Quartz news app and of course the numerous bots that will be hatched through Slack bot startups, to name a few. Most of these interact through chat, be it text or voice, and when the AI isn’t feeling chatty it’s winning at 2,500-year-old board games. I remember configuring an AliceBot maybe 10 years ago, and whilst at the time it felt like a scene from Blade Runner, it was positively naive compared with the complex behaviour on display today.
What caught my eye most recently, however, was Microsoft’s entry to the bot scene with Tay. Designed “to be entertainment”, Tay is a chat bot that pretends to be a 19-year-old American girl, complete with acronym-heavy “text speak”, the ability to play games and a strong opinion on some pretty heavy thought experiments. Tay will be available through Kik, GroupMe and Twitter initially, and over time will learn new skills and presumably perform better at the Turing test.
On the surface Tay seems like a bit of fun, a tech giant flexing its R&D muscle. But the ramifications could be profound. Tay got me thinking, how will these bots evolve, and how will we as a society perceive them?
Messaging bots + services: the ultimate brand advocate is a celebrity. If brands can develop their own AI celebrities they can exert fine-grained control over their message, and worry less about post-club drunken photos of their current “face” appearing in Heat magazine.
The bots we’ve grown accustomed to in the last few years are agents: Siri, Cortana, Amy, Alexa and, erm… “OK Google” (the latter lacking the necessary persona to really grow on us). They’re fairly passive in their approach: they act on our requests, very rarely instigating anything. I think this is where a big shift is about to occur; we’ll see more impetus from the agents to create original content, and ultimately they will begin to define their own goals.
It seems likely to me that agencies could in fact craft and tune personas powered by these underlying AI bot engines (AIaaS, please?) to become nothing short of celebrities, with millions of followers across the (human) social networks and a genuine human connection, within certain groups at least.
Well, any media outlet for sure: if you want to disseminate a message you’d better have either a great story or a pretty face. Brands could engage with experts to craft their ultimate brand advocate, an entirely constructed celebrity. Infinitely scalable and international, the Celebribot might engage in real-time media buying without the slightest instruction, based on the agenda and campaign package currently being relayed to it. Hey, if a mute Lara Croft can become a brand advocate for an energy drink, just think what could happen if she could talk, think, and plan for herself.
So this is where I think we are going with the new wave of bots. Can we look forward to manifestations of AI personalities hovering over us, dressed-up drones, perhaps HAL from 2001: A Space Odyssey or, if we’re lucky, something or someone more like Holly from Red Dwarf? Maybe I’ve been watching a little too much Black Mirror, but it certainly looks like our engagement with these entities is about to see a pace change.
I’ve recently started a new job, yesterday was my birthday and in a few weeks baby number 2 will arrive. These kinds of events or milestones often make people take a step back and think about their present direction. This morning I was out running, and I got thinking about work and life (read: personal time), and I wondered why I approach them so differently.
With work, or any business, I would never start without a plan, without a way to measure results, without distinct goals. With life, I, and many others I would propose, tend to either go through the motions, or deal with things in a more reactionary way. There’s no obvious focus or overarching direction at any one time. Why do we put so much effort into planning work, but perhaps not so much into planning “life”?
Finish something – I’d like to think of myself as a completionist, but the reality is I absolutely love starting things and too easily find something new to distract me. I currently have six Audible books, all two-thirds through, and when it comes to games my Steam account and phone will testify to the point. It may also explain the big sack of unused clay in the garage.
Be more present* – In the ’70s Alvin Toffler popularised the term Information Overload. I’m sure today (at 87) things are a lot worse than he ever thought possible. The cognitive inbox fills up faster than you can clear it: constant notifications, meandering through social networks. For me this can lead to never quite being “in the moment”, and it’s something I actively try to combat.
Family – A simple one, spend more time with the wife, kid(s) and other close or even distant family. Make a couple of journeys even if it’s only a “flying visit”.
Health and Wellbeing – Since my early 20s I’ve tried to keep physically fit with running and the gym, sometimes even eating healthily, but now I’m in my early 30s I’ve started to notice you have to pay in that little bit more, and if you ignore your own physical and mental health you can quickly find yourself swimming against the tide.
Go with the flow – The free card. Take a break from the process, let things just happen.
Reach out – It’s too easy to stay busy. Make time for someone.
Early to bed* – I am 100% guilty of staying up too late, almost every night. This habit never used to be a problem, before children.
The idea is simple: I write these on Post-it notes, put them on a dartboard and throw a dart once a week. The aim is to try something different each week, so there’s an element of skill, but ultimately it really doesn’t matter which I hit. The other thing to note is that the week’s pick isn’t my exclusive focus; the idea is to be especially mindful of that one big thing each day for that week.
There’s really no reason; you could roll a die or simply pick of your own accord, but in my experience turning something into a habit, or better yet a bit of fun, makes it much easier to stick to. A nice side-effect of this technique is that you’re always going to improve at something; if it’s not the objective you intended, at least you’ll be getting better at darts. 🙂
I’ll see how it goes and maybe look more at the macro scale later on. If you have any similar techniques or thoughts about this topic of concerted focus I’d love to hear them.
Recently Usborne books made their beautifully illustrated 1980s computing books for kids available for download. It turns out several of my friends and Twitter acquaintances picked up their love of coding from these books as youngsters, myself included.
I owe my career to these books. I first learned BASIC from this one back in 1993: http://t.co/a9SwJxLTrc http://t.co/cdinNZi47j
— Nick Lockwood (@nicklockwood) February 7, 2016
I remember being in a dentist’s waiting room where an old battered copy of “Computer Space Games” lay on the bookshelf. I was so engrossed they actually let me take that book home, and thus began my journey.
As an aside, today I’m a father of one (soon to be two), who absolutely loves Usborne’s latest “That’s not my…[Insert Subject]” touchy-feely book series. The pages of each book contain the phrase “That’s not my…” and the subject, which ranges from “Monkey” to “Snowman”. In some ways Usborne is continuing their logical thinking teachings with each page providing a condition that evaluates as true or false 😉 I highly recommend these for anyone with a young toddler.
Memory Lane
Flicking through these old computing books had me inadvertently taking a trip down memory lane. I didn’t have a computer for some time after I started “coding” (writing down programs in BBC BASIC), but that just made it all the more enticing: one day I’d be able to see these programs crash, er, run for real. The problem with my BBC BASIC skills was that the BBC Micro was already a relic when I was a young teen. I did eventually get an Amiga 600, on which I learned the programming language Amiga E (closely related to C), and later a Gateway PC with Windows 95, a Cyrix 5×86 CPU (Intel was expensive!), a 56k modem, CD-ROM, a VGA graphics card and a bucket load of power.
In those days kids like me hung around IRC, where after dinner I’d spend time chatting with quite a few “leet d00dz”. In these circles I came across a fantastic range of things, from mIRC script and Sub7 to assembly language (ASM). ASM is something I would encourage any young coder to at least get some experience with. It may be all but useless these days, with even the most throw-away chips happily running the voluminous instructions output by much higher-level languages. The main thing you learn from ASM is the fundamentals of how a computer’s brain takes your instructions and uses a much more limited set of constructs and variables (registers) to do anything. Ultimately, as a kid, this was the thing that sold me on computers: they can do anything, and all you needed was your brain and some time to create that anything.
Coding through necessity
When I was 15 or 16 we still used dialup modems to access the net. I think it cost something like 2p (£0.02 GBP) a minute to dial up, and during that time no-one could use the phone. It also made a racket so there was no sneaking online. We didn’t have a lot of money, so my internet time was limited to 30 minutes a day. So like a boy scout, in order to learn you had to be prepared. I ended up writing a Visual Basic app to spider and scrape sites, saving the pages to disk. This way I could dial up, have it scrape a bunch of sites to 3 levels deep and disconnect, reading at my leisure.
In chemistry class we were given homework of balancing symbol equations, hundreds of the things to work through. They aren’t hard; really it’s just grunt work to apply some basic rules. As I later found out, it’s a core tenet of a coder to be lazy and never repeat the same task more than once. So I wrote another little VB app which let you press buttons to input the elements and the numbers of units (e.g. O₂) and hit go. I sold this program on floppy disk for £1 a pop to classmates, and the homework problem was solved.
With hindsight the above are early examples of situations where coding solved a real world problem for me personally, and I suspect that might be the case for a few of you reading. I also wonder if the huge amount and instant availability of free content gets in the way of this desire to create, but I like to think that this desire is universal.
Languages
At school we learned Pascal (and Delphi), a little Prolog, and for a final project we had an open choice (I opted for Visual C++ with MFC and Crystal Reports, so practical). We were also taught to finger-trace, which I believe helps to minimise common typos in later years. From there I started to do “real work” with ActionScript (for my sins, 10 years as a Flash developer), JavaScript (web and later Node.js), some ColdFusion and ASP.NET, some iOS projects in Objective-C, and for the large part my days have been spent in Java (Android) in recent years. If you’re familiar with the 99 Bottles of Beer website you’ll know there are hundreds and hundreds of programming languages.

The other day I was wondering whether those 10 years of Flash and Flex, and the vast amount of time, perhaps some 5,000 hours, spent learning the ins and outs of a huge enterprise SDK, was time that has been quite simply lost. The thing I have learned, though, is that it doesn’t really matter which languages you’ve touched on over the years; it’s never a step backwards. ActionScript was based on ECMAScript-262 (as is JavaScript) and eventually evolved into something like Harmony-meets-Java. What I learned from it was how to use a dynamically typed language, how to architect apps with (Pure)MVC, and how to write testable code. It’s almost never time lost. Well, maybe there are some exceptions.
That was my story in a nutshell, and with the time passed I’m no doubt missing a lot out, but what did your journey look like? What were the key moments that made an impact on you, what did you learn, and why?
Are You OK? App
I’ve just published a companion site for my free app Are You OK?.
The app is aimed at people wishing to regularly check on the status of family or friends who may, for example, live alone and be vulnerable to accidents like a fall in the home, leaving them unable to call for help. It’s something like the reverse of a panic-button system: if they don’t press a button every few hours, it sends an SMS message to selected contacts with a call to check in.
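The core check amounts to a dead-man’s-switch timer, which can be sketched like this (an illustration only, not the app’s actual code; should_alert and the 4-hour window are made-up for the example):

```python
from datetime import datetime, timedelta

def should_alert(last_checkin, now, interval):
    """True when the user hasn't pressed the button within `interval`.

    A real implementation would run this check on a schedule and, when it
    trips, send the SMS to the configured contacts.
    """
    return now - last_checkin > interval

# Example: a 4-hour check-in window
window = timedelta(hours=4)
```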
Visit the site to read more about the app and find the download link.
Fragments and Activities in Android Apps
UPDATE: 5 years later this post is pretty out of date. Some of it still holds, but it is now possible to better architect primarily “single Activity” apps, especially with the advent of the Android Navigation component. For posterity, the post below remains…
When asking “should I use a Fragment or an Activity?” it’s not always immediately obvious how you should architect an app.
My advice is to avoid a single “god” Activity (h/t Eric Burke) that manages navigation between tens of Fragments – it may seem to give you good control over transitions, but it gets messy quickly*.
My go to is always to use a combination of Activities and Fragments. So here are some tips:
If it’s a distinct part of an app (News, Settings, Write Post), use a new Activity. This Activity may be fairly light-weight, simply inflating a Fragment in its layout XML or in code.
For everything else use Fragments.
This gives you flexibility when combining Fragments in Activity layouts for tablet.
Create a BaseActivity class which handles setup/styling of ActionBar and SlidingDrawerLayout if you have that kind of navigation.
Fragments don’t need to be visual: an Activity can use the FragmentManager to create a persistent headless Fragment with setRetainInstance(true), whose job may be to perform a background task (update, upload, refresh) – this means the user can rotate the device without destroying and recreating the Fragment, and it is sometimes an alternative to binding to a Service in onResume().
Some good sources on how to architect apps; as always, the Google I/O Schedule app:
http://github.com/google/iosched
and Eric Burke’s 2012 talk, around half-way through:
When dealing with deeper hierarchies, and with navigational requests that come from a user action within a Fragment.
When you need the ActionBar to be in overlay mode (for a full screen experience) but only in certain screens.
When you need to create new tasks (either shooting off to another app and back, or allowing other apps to start Activities in your app, such as with a Share action).
There are many more, please feel free to add some in the comments if you can think of any.
Load Testing Live Streaming Servers
There are two types of test I’ll describe below. First, Apple HLS streams: HTTP Live Streaming over port 80, supported by iOS and Safari, and also by Android (apps and browser). Then Adobe’s RTMP over port 1935, mostly used by Flash players on desktop, which covers browsers like Internet Explorer and Chrome. These tests apply to Wowza server, but I think they’ll also cover Adobe Media Server.
All links to files and software mentioned are duplicated at the end of this post.
It’s worth noting that you can stick to HLS entirely by using an HLS plugin for Flash video players such as this one, and that is what we’re doing in order to make good use of Amazon’s CloudFront CDN.
In this test we want to load test a Wowza origin server itself to see the direct effect of a lot of users on CPU load and RAM usage. This test is performed with Flazr, via RTMP on port 1935.
This assumes you’ve already set up your Wowza or Adobe Media Server, for example by using a pre-built Wowza Amazon EC2 AMI. We’re using an m3.xlarge instance for this test as it has high network availability and a tonne of RAM, and we’re streaming 4 unique 720p ~4Mbit streams to it, transcoded to multiple SD and HD outputs (CPU use from this alone is up to 80%).
The order of parameters does seem to matter in later versions of Flazr, but either way this test runs for 60 seconds with a load of 1,000 viewers. Given all the transcoding our CPU was already feeling the pain, but there was no sign of trouble. We managed 4,500 viewers before anything started to stutter in our test player running on another m3.xlarge instance.
Onto HLS streaming, the standard for mobile apps and sites. We have used Wowza CloudFront Formations to set up HLS caching for content delivery, so that we can handle a very large number of viewers without impacting the CPU load or network throughput of the origin server, and to give us greater redundancy. Since CloudFront works with HLS streams we are not using RTMP for this test, so we cannot use Flazr again. To test HLS consumption –that being the continuous download of .m3u8 files and their linked .ts video chunks– we can use a tool called hlsprobe, which is written in Python.
If you’re on a Mac and don’t have python I recommend you install it via brew to get up and running quickly. If you don’t have brew, get it here.
#on a mac
brew install python
#on ubuntu/amazon
sudo apt-get install python
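For a sense of what a tool like hlsprobe has to do, here’s a rough Python sketch of one simulated viewer’s polling cycle (my own illustration, not hlsprobe’s code; parse_playlist, simulate_viewer and the injected fetch are made-up names):

```python
def parse_playlist(m3u8_text):
    """Return the media URIs listed in an M3U8 playlist.

    Per the M3U8 format, lines starting with '#' are tags/comments;
    the remaining non-empty lines are segment (or sub-playlist) URIs.
    """
    return [line.strip() for line in m3u8_text.splitlines()
            if line.strip() and not line.startswith("#")]

def simulate_viewer(fetch, playlist_url):
    """One polling cycle: fetch the playlist, then each segment it lists.

    `fetch` is injected (e.g. built on urllib) so the logic is testable
    offline. A load test runs many of these cycles concurrently, on a
    timer matching the playlist's target duration. Returns the number of
    segments downloaded this cycle.
    """
    segments = parse_playlist(fetch(playlist_url))
    for seg in segments:
        fetch(seg)  # downloading the .ts chunks is what generates the load
    return len(segments)
```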
It’s good to be able to simulate live streams at any time, either from your computer or, in my case, from some EC2 instances. To do this I’ve written a simple Node.js script which loops a video, optionally transcoding as you go. I recommend against transcoding on the fly due to high CPU use and therefore frame loss; in my sample script I pass the video and audio through directly, as the video already uses the correct codecs, frame size and bitrate via Handbrake.
Edit the js script to point to your server, port, and video file, then run the script with:
node fakestream.js
If the video completes it’ll restart the stream, but there will be a second of downtime; some video players automatically retry, but to be safe make sure your video is longer than the test.
These are just a couple of ways of load testing a live streaming server. There are 3rd-party services out there, but we’ve not had great success with them so far, and this way you have a lot more control over the test environment.
Links
fakestream.js – NodeJS script to simulate live streams
config.yaml – Sample config for hlsprobe
hlsprobe – Tool for testing HLS streams
Flazr – Tool for testing RTMP streams
OSMF-HLS – OSMF HLS Plugin to support HLS in Flash video players
Postman Collection to HTML (node script)
If you use the excellent Postman for testing and developing your APIs (and if you don’t yet, please give it a try!) you may find this little node script helpful when generating documentation.
It simply converts your downloaded Postman collection file to HTML (with tables) for inserting into documentation or sharing with a 3rd-party developer. The Postman collection is perfect for sharing with developers as it remains close to “live documentation”, but sometimes you need a more readable form.
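The script itself is Node, but the idea is simple enough to sketch here in Python (collection_to_html is a made-up name, and the field names assume the common v2-style collection layout of an “item” list with “name” and “request” entries; nested folders are ignored):

```python
import html

def collection_to_html(collection):
    """Render a Postman v2-style collection (as a dict) into an HTML table."""
    rows = []
    for item in collection.get("item", []):
        req = item.get("request", {})
        url = req.get("url", "")
        if isinstance(url, dict):  # v2.1 stores the URL as an object
            url = url.get("raw", "")
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(item.get("name", "")),
            html.escape(req.get("method", "")),
            html.escape(url)))
    return ("<table><tr><th>Name</th><th>Method</th><th>URL</th></tr>"
            + "".join(rows) + "</table>")
```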
I’ve recently finished work on an app that registers itself as a handler for a given file extension, let’s call it “.mytype”, so if the user attempts to open a file named “file1.mytype” our app would launch and receive an Intent containing the information on the file’s location and its data can be imported. Specifically I wanted this to happen when the user opened an email attachment, as data is shared between users via email attachment for this app.
There are many pitfalls to doing this, and the Stack Overflow answers I saw given for the question had various side-effects or problems. The most common was that your app would appear in the chooser dialog whenever the user clicked on an email notification, for any email – not just those with your attachment. After some trial and error, I came up with this method.
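Roughly, the filter looks something like this (a sketch for illustration; the Activity name and exact attributes are placeholders, not necessarily the exact method used):

```xml
<activity android:name=".ImportDataActivity">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <!-- the attachment when its mime type has been set correctly -->
        <data android:scheme="content" android:mimeType="application/mytype" />
        <!-- fallback for clients that send attachments as a generic stream -->
        <data android:scheme="content" android:mimeType="application/octet-stream" />
    </intent-filter>
</activity>
```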
Something to note here: I’ve specified a filter for both the “application/mytype” mime type and also the more generic “application/octet-stream” mime type. The reason is that we can’t guarantee the attachment’s mime type has been set correctly. We have iOS and Android users sharing timers via email; with iOS the mime type is set, but with Android, at least in my tests on Android 4.2, the mime type reverts to application/octet-stream for attachments sent from within the app.
Permissions
I initially put these IntentFilters on the “home” Activity of my app; however, I soon started encountering security exceptions in LogCat detailing how my Activity didn’t have access to the data from the other process (Gmail). I realised this was because my Activity’s manifest entry had the launch mode set to:
android:launchMode="singleTask"
This prevents multiple instances of it being launched, which is important when users can launch the app from either the launcher icon or, in this case, via an attachment (I didn’t want multiple instances of my home Activity running, as that would confuse the user). So the solution was simply to create a new “ImportDataActivity” that handled the data import from the attachment, and then launched the home Activity with the Intent.FLAG_ACTIVITY_CLEAR_TOP flag added.
Importing the data
So in ImportDataActivity we need to import the data stored in the attachment, in my case this was JSON. The following shows how you might go about doing this:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Uri data = getIntent().getData();
    if (data != null) {
        getIntent().setData(null);
        try {
            importData(data);
        } catch (Exception e) {
            // warn user about bad data here
            finish();
            return;
        }
        // launch home Activity (with Intent.FLAG_ACTIVITY_CLEAR_TOP) here…
    }
}

private void importData(Uri data) throws Exception {
    final String scheme = data.getScheme();
    if (ContentResolver.SCHEME_CONTENT.equals(scheme)) {
        ContentResolver cr = getContentResolver();
        InputStream is = cr.openInputStream(data);
        if (is == null) return;
        StringBuilder buf = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(is));
        String str;
        while ((str = reader.readLine()) != null) {
            buf.append(str).append("\n");
        }
        reader.close();
        JSONObject json = new JSONObject(buf.toString());
        // perform your data import here…
    }
}
That’s all that’s needed to register-for, and read data from custom file-types.
Sharing via email attachment
Now how about sending an email with a custom attachment. Here’s a sample of how you might do that:
// assumes `json` is the JSONObject you want to share (see the import code above)
String recipient = "",
       subject = "Sharing example",
       message = "";

final Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);
emailIntent.setType("message/rfc822");
emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{ recipient });
emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, subject);
emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, message);

// create attachment
String filename = "example.mytype";
File file = new File(getExternalCacheDir(), filename);
FileOutputStream fos = new FileOutputStream(file);
fos.write(json.toString().getBytes());
fos.close();

if (!file.exists() || !file.canRead()) {
    Toast.makeText(this, "Problem creating attachment",
            Toast.LENGTH_SHORT).show();
    return;
}

Uri uri = Uri.parse("file://" + file.getAbsolutePath());
emailIntent.putExtra(Intent.EXTRA_STREAM, uri);
startActivityForResult(Intent.createChooser(emailIntent,
        "Email custom data using..."),
        REQUEST_SHARE_DATA);
Please note that REQUEST_SHARE_DATA is just a static int constant in the class, used in onActivityResult() when the user returns from sending the email. This code will prompt the user to select an email client if they have multiple apps installed.
As always, please do point out any inaccuracies or improvements in the comments.