Are <meta> tags a thing of the past?

Are <meta> tags a thing of the past?

Well…yes, and no! This article will explore which <meta> tags help, which hinder, and which are now obsolete.

Traditionally, meta tags were placed in the head section of a webpage, hidden away from the public, to help bots find out about your website's contents. An example is given below.
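This is a minimal sketch of what such a tag looked like – the keywords here are invented purely for illustration:

<head>
  <!-- keywords once read by search engine bots, now largely ignored -->
  <meta name="keywords" content="web design, hampshire, portfolio, cheap flights">
</head>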

With advances in search engine optimisation (SEO), the importance of meta tags has been superseded by the relevance of the page contents, rather than what the developer thought were the best keywords. Often, a website could receive a higher ‘click-rate’ than it deserved because the developer included popular keywords in the meta tags. If you wanted to drive traffic in ‘the good old days’ you would simply include keywords popular at the time – even if they had absolutely nothing to do with the site content! Whack “Spice Girls” in as a meta keyword and you would get a higher ranking. This eventually became a real issue, with genuine websites ranking lower than those stuffed with irrelevant keywords. The search engines soon developed better algorithms that looked for keywords within the page content itself, marked up with elements such as <h1>, <h2>, <section>, <article>, <strong>, <em> and id attributes. In fact, it could now be argued that keyword meta tags hinder your ranking by restricting your site to terms that may not match what people are actually typing to find it. They can also have a detrimental impact on your website if your rivals can see the keywords you are using to try to attract customers.
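As a rough illustration of the kind of semantic markup those newer algorithms read (the content here is invented):

<article>
  <h1>Medieval Timber-Framed Buildings</h1>
  <section>
    <h2>Dendrochronology</h2>
    <p>Tree-ring dating can provide a <strong>felling date</strong> for the timbers used in construction.</p>
  </section>
</article>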

<meta name="description">, however, can be a very useful tag. By using the meta description tag, you can directly control what is displayed in the listing for each page in the search engine results. Without this tag, the search engine will return what it thinks is a suitable description of your site.

You don’t need to restrict this tag to just a site description either; you could include phone and contact details so they are shown without the need to visit your page. This can be very useful for a GP surgery where someone just needs to find the phone number quickly to book an appointment. It is also the quickest way to make an impact on the searcher: an easy way to tell prospective clients, in fewer than 155 characters, what you are about before they click. If your description better fits their need than a competitor’s description, you are more likely to get the click-through. So the description tag is not mandatory, but it can help you control the description of the site and potentially drive traffic.
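As a hedged sketch, a description tag for that GP surgery might look something like this (the practice name and number are invented):

<head>
  <!-- shown as the snippet under the search result, so keep it under ~155 characters -->
  <meta name="description" content="Example Road Surgery, Anytown. Book appointments on 01234 567890. Open Monday to Friday, 8am to 6pm.">
</head>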

The robots tag is invaluable if you want to “hide” certain pages or content from public search engines by telling the bot what to crawl and what not to crawl. You may have an employee or student area that you do not wish to appear in a public search, for instance. In that case you would just tell the robot to ignore those pages with a <meta name="robots" content="noindex, nofollow" /> in the head of the page you want to hide.
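For example, the head of a student-area page might contain something like this (the page title is invented for illustration):

<head>
  <title>Student Area – Login</title>
  <!-- tells well-behaved bots not to index this page or follow its links -->
  <meta name="robots" content="noindex, nofollow" />
</head>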

The Open Graph (OG) meta tags enable a web site/page to “become a rich object in a social graph”. In other words, sites like Facebook will use the OG tags to better understand the site content and display images and information about it. These tags are crucial if you want to control how your site looks when shared on social media. Although this may seem regressive – having to rely on meta tags again – they are based on RDFa and, if used, have four required tags. If we use IMDb as an example, the following is the list of tags required to create an Open Graph object:

  1. og:title – The title of your object as it should appear within the graph, e.g., “The Rock”.
  2. og:type – The type of your object, e.g., “video.movie”. Depending on the type you specify, other properties may also be required.
  3. og:image – An image URL which should represent your object within the graph.
  4. og:url – The canonical URL of your object that will be used as its permanent ID in the graph, e.g., “http://www.imdb.com/title/tt0117500/”.

This would translate into HTML as:

<html prefix="og: http://ogp.me/ns#">
<head>
<title>The Rock (1996)</title>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
<meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
<meta property="og:image" content="http://ia.media-imdb.com/images/rock.jpg" />
...
</head>
...
</html>

So then, in conclusion, can it be said that, with the exception of the description meta tag, meta tags are dead? I would reluctantly answer: if the purpose is to aid SEO, then yes, the keywords meta tag is dead. However, if you want to control your site’s description in a search engine, or the way it displays on social media, then you still need them. OK, so the OG meta tag is not a meta tag in the traditional sense, but it still needs to be placed in the head. This is something WordPress will not allow without the use of plugins and trickery – but please, do NOT get me started on WordPress! WordPress is to web dev what “nighthawks” are to archaeologists!

So if you want great SEO, write well-formed code utilising the power of HTML5, with a sprinkling of meaningful <meta> tags that describe your content, direct bots and create Open Graph objects for social media inclusion.

#mftLearnToCode #ttrLearnToCode https://genericweb.co.uk/

Why do we need tutors for online courses?

I work as a tutor for an online training provider and we are often asked, “what is the point of a tutor if they are working online?” In this article, I aim to answer that question. Obviously, I would not claim that a tutor is strictly necessary for a student to pass their exams, but I do believe we play a valuable part in their success and satisfaction. It has certainly been my experience that the greater the interaction with the tutor, the better the experience the student has with the course and, therefore, the more likely they are to pass first time. Where an online course can often fail is in contextualising the content and making it relevant to the student’s experience and prior knowledge. By getting to know the student, even if it is only through a few emails, we can really help them relate their learning to real-world examples and contextualise the technologies they encounter, both in the course and in the real world.

It is also important to give assurances and confirmation of the student’s learning journey. Creating confidence in the student’s ability to learn is a crucial foundation for building trust between the student and the information they are learning from the course. The sterile environment of non-contextualised online learning can really be enhanced by human interaction and affirmation. Without it, the student can feel alone and unsure they are actually learning. Uncertainty and isolation can lead to failure to finish the course and create an adverse learning journey that will put them off future learning. It doesn’t have to be an isolated, negative learning journey; it can be turned around by tutor support.

I could cite hundreds of articles that posit we tend to learn in different ways and in different styles – visual, aural, verbal, physical, logical, social and solitary. Granted, online learning covers much of that, and video tutorials can also cover many of those styles that text-based learning cannot. I would like to add an eighth style – contextualised. The majority of the exams for CompTIA, CIW, and Microsoft are based on the technologies they cover and how to implement them correctly as a solution to a problem. They are not about learning the meanings of acronyms and answering verbatim what they have read. If you are unfamiliar with the technologies you are learning, it is very difficult to contextualise why and when you would employ certain technologies with other technologies to solve the problem. In learning what an acronym can do and why we would use it, the meaning will become apparent in a multiple-choice question anyway. Let’s face it, how many “professionals” remember the exact meaning of all those acronyms they use daily? But, we all know how to use them and when to use them.

To “contextualize something [is] to consider something in relation to the situation in which it happens or exists” (Oxford Learner’s Dictionary).

I have taught all age groups and abilities during my time as a lecturer, teacher and now a tutor, and I can safely say the most challenging and yet rewarding part was contextualising the learning journey. For me, code is a prime example. Every student wants to dive head first into code and get things going by building the next breakthrough app. Some students can just do this, but most cannot and will falter because they do not understand the theory, structure and context. Every program is based on an algorithm of some sort. Even if you do not take the time to build an algorithm, one can be applied to the code. I think it essential that students at least understand the principles of algorithms and structure before they code. The program needs reason and it needs context, or you just start building unstructured code. If you understand the basics of an algorithm, you understand the blocks and the separate functions/components needed to construct well-formed code. A tutor can help with that. A tutor can feed back best practice, context and industry trends. A Boolean eLearning platform cannot.

Contextualised learning is not just about placing the technologies to help the student better understand. It is also about how you relate the learning to the individual student, so they can better learn. The very nature of eLearning means we can have students from any background. A generic eLearning platform cannot, by its Boolean nature, explain all things to all people. A tutor, however, can have a blinking good try! Trying to break down the learning journey into a voyage the student can understand and follow is invaluable, and I would argue, only achievable by contextualised learning from a tutor.

—————————————–

I hope this article has gone some way in helping you understand the importance of contextualised learning. If it has…please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author, – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real industry-ready skills that matter at The Training Room.

#ttrIT #ttrcareerinIT #ttrLearnToCode

Visit his Blog and Website

Read more about Dr Richard Haddlesey BSc MSc PGCE PhD

Teaching Python vs HTML

So, I was talking to a colleague the other day and he was asking me about my experience of teaching Python at secondary schools – mainly, “is Python forgiving?” My simple answer was – no! Python is not forgiving at all in comparison to HTML.

I now primarily teach the web trifecta – HTML, CSS and JavaScript – and the question came about because of the forgiving nature of HTML. Students can submit some shockingly ill-formed code that will still display in a browser.

For instance –

<p>hello world</p>

will display “hello world” in a browser. Yes, it works, but it is far from well-formed; best practice looks more like this –

<!DOCTYPE html>
<html>
  <head>
    <title>better code</title>
  </head>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>

Python, on the other hand, is far more pedantic about its syntax rules. Often a single misplaced / or :, or a " where a ' was expected, will render the code useless. I would spend hours of teaching time locked in debugging to find where a student had forgotten that colon! Don’t get me wrong, I love debugging – no, I really love debugging, and I am good at it because I enjoy it. However, debugging code because a student cannot copy from a book without dropping a colon is not so much fun.

If I am to compare the two languages in a teaching environment, teaching Python is probably easier. Why, you ask? Well, because:

• it has to be right to work!
• it has to be indented
• it has to be well-formed
• it has to be structured
• it has rules that must be followed
• the syntax is exacting

Sure, this is harder to teach – “forget free will and conform your code” can stump the more creative students – but you are industry-ready if you learn it right. Good old slap-happy HTML will often work no matter how badly the code is written or formed. This makes it harder to teach, because students know they can get away with:

• a lack of structure
• no indents
• ill-formed code
• missing syntax
• a real “DIY” mentality

Therefore, teaching HTML that works is very easy – teaching HTML that is industry ready is really not!

“Well my webpage works doesn’t it sir?”

“Well…erm…yes, but it is a mess and the code is all over the place and very hard to follow”

“Whatever, sir…it works and that’s good enough for me, so the heck with it!”

OK, so back to the anecdote – Python.

After getting so frustrated with putting colons and the like into students’ code, I started playing a game. We had loads of old keyboards at school, so I got a load and removed the . , ? ; : " ' keys from them. If a student asked me why their code wasn’t working, I would look. If it was logic, I would explain and work with them on it. If it was syntax, I would go to my desk and pick up the corresponding marked key and place it on their desk. Obviously, if the key they were given was a colon, they knew they had to look for the missing colon in their code. This encouraged them to look for syntax errors themselves, rather than face the embarrassment of receiving a broken keyboard key placed on their desk.

Gosh…I miss teaching Python. However, I love teaching the web trifecta here at The Training Room.

No matter what language you learn – learn to code!

#ttrLearnToCode #ttrIT

The Evolution of the Computer Science GCSE

During the 1980s, computer studies and computers were in their infancy[1]. The BBC Microcomputer was the only real choice for schools at the time. This early PC had very little in the way of end-user applications and relied on a BASIC interpreter, which meant the user needed to learn to program and build their own applications[2]. This resulted in schools focussing on teaching how to program a computer alongside how the computer works. As computers became more popular and more applications became available, the focus of teaching switched to how to use a computer and its applications, and ICT was born during the 1990s[3].

During the 2000s, it was becoming clear that ICT was no longer fit for purpose and that students were leaving school with skills in digital literacy, but not in computing. ICT began to receive negative reports from industry, educators and students as it was seen as a boring and repetitive subject that only taught how to use Microsoft Office[4].

It was not until 2010 that The Royal Society, based on information from the Computing At School group (CAS), Ofsted, Microsoft and Google (among others), set up an Advisory Group chaired by Professor Steve Furber FRS[5]. The report’s first recommendation was to stop using the acronym ICT because of its “negative connotations”, as quoted below.

  • “Recommendation 1 The term ICT as a brand should be reviewed and the possibility considered of disaggregating this into clearly defined areas such as digital literacy, Information Technology, and Computer Science. There is an analogy here with how English is structured at school, with reading and writing (basic literacy), English Language (how the language works) and English Literature (how it is used). The term ‘ICT’ should no longer be used as it has attracted too many negative connotations”[6].

Aside from the name ICT, it was becoming clear that the “current delivery of Computing education in many UK schools is [sic] highly unsatisfactory” and needed addressing[7]. Indeed, even the UK Education Secretary at the time, Michael Gove (May 2010 to July 2014), was quoted as saying the ICT curriculum was “demotivating and dull”[8]. This was brought into the headlines by the executive chairman of Google, Eric Schmidt, when he addressed the Edinburgh TV festival in 2011, saying,

  • “I was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Your IT curriculum focuses on teaching how to use software but gives no insight into how it’s made. That is just throwing away your great computing heritage”[9].

As a result of growing pressure from industry, Michael Gove announced that the UK Government would replace ICT with a new Computer Science curriculum from September 2012 (the start of the UK’s academic year). In that speech, Gove posited,

  • “Imagine the dramatic change which could be possible in just a few years, once we remove the roadblock of the existing ICT curriculum. Instead of children bored out of their minds being taught how to use Word or Excel by bored teachers, we could have 11-year-olds able to write simple 2D computer animations”[10].

Visit my Interactive CV

Bibliography

[1] Doyle, GCSE Computer Studies for You.

[2] Brown et al., ‘Restart’.

[3] Ibid.

[4] Coquin, ‘IT & Telecoms Insights 2008: Assessment of Current Provision’.

[5] Furber and et al, ‘Shut down or Restart?’, 12.

[6] Ibid., 18.

[7] Ibid., 5.

[8] Burns, ‘School ICT to Be Replaced by Computer Science Programme’.

[9] Schmidt, ‘Edinburgh TV Festival’.

[10] Burns, ‘School ICT to Be Replaced by Computer Science Programme’.

No more ICT…….Please!

It was not until 2010 that The Royal Society, based on information from Ofsted and Microsoft (among others), set up an Advisory Group chaired by Professor Steve Furber FRS. The report’s first recommendation was to stop using the acronym ICT, yet here we are – 7 years later – still using IT! Why? #ttrLearnToCode

There is more to providing learning for the I.T. industry than just teaching code!

As IT trainers, should we be installing and promoting “good practice” and “ethics” alongside the coding and theory?

With the absence of a professional regulatory body in I.T. and web development, it is up to us to self-regulate. In doing so, it is essential that we pass on ethics and good practice to the next generation of developers and coders. It is not enough to just teach “good code” and computational thinking, we must provide the wisdom and morals to allow our students to implement their code in an ethical way.

Hacking, espionage, targeted advertisements, ransomware, cyber-terrorism, fake news, fraud, spam, SQL injections, sexting, legacy content, the right to be forgotten, etc. – these are the headline threats to the future of our “online” world. However, beyond the obvious are the underlying ethics that affect our daily interactions in the digital age. It is crucial, I believe, that we create a culture of “best practice” within the IT industry to maintain our integrity and elicit trust from our clients and the wider public.

So, if I am not talking about the headline threats to online and digital ethics, what am I talking about?

I am referring to the need for standards and collaboration across the industry. The simple things that make life easier for us all:

  • Indenting your code so others can read it
  • Commenting your code so others can understand it
  • Personalising your code so others can’t plagiarise it
  • Making your code efficient and elegant to inspire others
  • Sharing code snippets with others so we can learn from your code and you from ours
  • Developing your code to be neutral of external influences (no politics, race, borders)

 

Let us now break these points down.

Indenting code

Indenting code has its advantages and disadvantages, but I will argue the positives far outweigh the negatives. Just as we use white space and paragraphs in written language to add emphasis and separate concepts, written code also benefits from this. Placing blocks of code separate from other blocks, or indented within a larger or parent block, helps others to read your code. Not only that, it makes it much easier for the developers themselves to isolate blocks of code when it comes to debugging, or when showcasing the code to the client or other team members. Different coding languages have different levels of indentation built in – Python, for example – whereas HTML will not enforce indentation (unless you are using a dedicated code editor or IDE). The W3C does go some way to highlighting the need for indentation in its style guide by suggesting:

  • Do not add blank lines without a reason.
  • For readability, add blank lines to separate large or logical code blocks.
  • For readability, add two spaces of indentation.
  • Do not use the tab key.
  • Do not use unnecessary blank lines and indentation. It is not necessary to indent every element.

Sublime Text 3 has a really innovative way of helping with indentation: it draws guide lines that connect the levels of indent so you can visualise related blocks of code. This also helps with ensuring you properly </close> your elements. As mentioned before, though, these guidelines are “best practice” at most, and are neither enforced nor necessary. This naturally causes ambiguity in code, with various editors creating different indent depths; some automatically create indents (Dreamweaver, etc.) whilst others do not (Notepad++, Sublime Text, Brackets). The point I make here is that someone new to the coding environment using a free editor will not necessarily be aware of indenting. Does this make their code wrong? Does it stop it from working? Will it stop them from being paid? The answer is no! However, it may not endear them to their colleagues, will set them apart as “noobs” and will, hopefully, imply they are not certified developers – a theme we shall return to later.
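To illustrate the two-space indentation the W3C style guide suggests, here is a minimal sketch (the content itself is invented):

<body>
  <section>
    <h1>Indented code</h1>
    <p>Each nested element sits two spaces in from its parent, so related blocks are easy to isolate when debugging.</p>
  </section>
</body>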

<!-- Commenting your code -->

I understand that commenting your code is most associated with teachers wanting their students to show their understanding when compiling code, but its use is far more important than that. If you are being paid to write code for a company, the company will often own the intellectual property rights to that code. Therefore, they have a right to understand what parts of the code are doing. That aside, if you are working as part of a team, other members of your team will need to know what parts of the code are doing. I am certainly not suggesting you comment every line or element, but you should comment a block, function, iteration, concept, external file, etc. – for your own understanding and sanity, if not that of others. I am not referring to putting in alt text for an image – although, clearly, that is good practice too – I am more concerned with professional ethics and good practice than with semantics.
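As a rough sketch of the level of commenting I mean – one comment per block or external file rather than per line (the file names are invented for illustration):

<!-- Main navigation: links are populated dynamically by nav.js -->
<nav id="main-nav">
  <ul>
    <li><a href="index.html">Home</a></li>
  </ul>
</nav>

<!-- Contact form: submits to process-form.php, which validates the input -->
<form action="process-form.php" method="post">
  <input type="email" name="email">
  <input type="submit" value="Send">
</form>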

Personalising your code so others can’t plagiarise it so easily.

OK, this may seem sneaky, but using another developer’s code as your own is far sneakier! It may also be prudent to add comments to your code to try and catch those plagiarising it, or using it for financial gain and infringing your intellectual property rights. Beyond commenting, you could also add a few “false” lines of code. I am all for sharing code, or examining others’ code as a starting point or inspiration, but when you do, you should <!--comment--> in a #reference to the original code and thus credit the author.

 

Share code snippets with others so we can learn from your code and you from ours

Finally – join a forum! Learn, share, create, collaborate, ask, tell, say, question, expand. There are plenty of places to get involved. Stack Overflow is simply amazing and a must-join for any budding or professional developer. Join “roughly 40 million developers who visit the site every month” and ask more than 8,000 questions a day! Another great resource is GitHub, where you can share and collaborate.

I hope this article has gone some way in helping you understand the importance of industry ethics and “good practice”. If it has…or hasn’t… please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author, – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real industry ready skills that matter at The Training Room.

#ttrIT #ttrcareerinIT #ttrLearnToCode

Crossing the Digital Platforms in WebDev

HTML5, CSS3 and JavaScript undoubtedly make creating cross-platform websites easier, but it is still essential that Web Developers/Designers thoroughly test their sites before making them “live”.

Most websites will have been designed and built on a PC, Mac or laptop, but increasingly they are being viewed on tablets and mobiles. This is often overlooked. As teachers of computer science, we would often ask students to log in to a site, or interact with a site to do some homework. Increasingly, students would reply, “we couldn’t get the website to work on my mum’s tablet or my phone, sir”. “Don’t you have a laptop or computer at home?” “No sir, we don’t need one”.

Granted, it tends to be the youth that prefers mobile devices, but in 2016 the number of people accessing the web via mobiles and tablets surpassed the number using laptops (hallaminternet). It is also fair to assume that not everyone wants to download and install another mobile app that they will barely use. Therefore, it is crucial that, as web developers/designers, we create responsive sites that transfer the content to any platform without the need for the user to resize the site’s content. Even if CSS3 is not your strength, there are plenty of responsive templates on the net. Some of these – TEMPLATED – are provided free of charge and royalty free (providing you reference them). Even as a “seasoned pro”, basing code on pre-existing templates saves time and can be used to show the client quickly what to expect before you commit to the lengthy process of bespoke coding. I am certainly not suggesting we plagiarise, but if we use templates and comment our code professionally, I see no harm. As a pro, you should be able to delve into the code and personalise and change the content anyway; it is just a useful starting point.

Anyway, I digress a little. Back to the main thread. I am hoping that – using HTML5, CSS3 and JS – we can move away from mobile-specific content, and more towards a responsive page that will display the same content optimised to the device it is viewed on. This will help greatly with “branding” and provide a consistent experience for the user. It also means the developer only needs to design each page once and apply a stylesheet to it, rather than make several iterations of a page designed to display on different devices.
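As a minimal sketch of that approach (the class name and breakpoint are chosen purely for illustration), a viewport meta tag plus a simple CSS media query lets one page adapt to the device it is viewed on:

<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <style>
    /* two columns side by side on larger screens */
    .column { width: 50%; float: left; }
    /* stack the columns full-width on screens narrower than 600px */
    @media (max-width: 600px) {
      .column { width: 100%; float: none; }
    }
  </style>
</head>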

——————————————————————————————————-

I hope this article has gone some way in helping you understand the importance of responsive, cross-platform design and testing. If it has…please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author, – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real industry ready skills that matter at The Training Room.

#ttrIT #ttrcareerinIT #ttrLearnToCode

Cloud Chaos?

As a student, I was always told – no, forced – to create folders on my computer to store all documents pertaining to one course or project. All files, no matter the extension or type, go in a folder so they can all be found and all the links to files will work. The same was true for your desktop: keep it clean, keep it in folders. I am a huge fan of arranging all my work into folders and logical places on my physical drives. That often means I do have to use USB sticks to transfer work over to another device, but at least I can. So, for years now, I have been meticulously placing the right file in the right folder to enable myself to locate said file with ease, and to ensure that if I copy a folder over – via memory stick – to another device, all the “stuff” is there. Makes sense, I hope?

Now, we have cloud storage, cloud backups, cloud software, cloud this and cloud that. Most major companies offer cloud storage or backup:

• Microsoft OneDrive
• Google Drive
• Adobe Creative Cloud
• GitHub
• Kindle Cloud Reader
• Dropbox
• Sony Memories
• Canon/Nikon for photos
• Samsung
• iCloud

The list goes on. Most of the services above create a folder on your physical drive that is “synced” to the “cloud”. So, the work you do in MS Word will be in a OneDrive folder, while the image you created will be in your Adobe Creative Cloud folder. More on this later, but I think we need to look at what the cloud actually is first.

The “cloud” is a physical location! It is not a cloud of data that just floats in space waiting for you to grab at it. The cloud will not rain data when it gets near a hilltop or the North of England. It is located on various servers at various server farms around the globe (depending on the size of the company that stores your cloud data). Those servers are constantly connected to the World Wide Web and the internet so that you can access your data anywhere at any time. The data is across several different farms, often floating back and forth (a cloud of data) to ensure the data is always backed up and accessible should a server fail. So then, the cloud is rather fantastic! You can work on any device, at any location, on any platform, using various software at any time – within reason and with some caveats. On top of this, you can rest assured you have a copy of a file backed up somewhere. Even if you delete a file, there is a greater chance of recovering it or at least finding an older version of it.

So, all this is great! So why the title “Cloud Chaos”? Well, if we think back to the folder scenario I have already alluded to, we now need to create a folder for each cloud provider rather than for each project. My simple logic brain now struggles to think – which cloud provider has what file in which subfolder for which project? For instance, I could be planning and designing a web page. I may write all the text in MS Word first, then save the work in my OneDrive so I can access the document on my phone too. Next, I create an image in Photoshop and save the image in my Adobe Creative Cloud folder. I may start my HTML code in Dreamweaver and save it, too, to the Adobe Creative Cloud, but my colleague needs to work on the code as well, so I place the code on GitHub so we can share it in real time without my having to give them access to my Adobe account. So now I have different files in different cloud folders. It is amazing that colleagues can share work and be assured they are working on the latest iteration; however, my poor dyslexic mind cannot cope easily with files in various locations rather than in just one location that I control. So, on one hand, our workflow has the potential to grow and the potential for collaboration is almost limitless (bandwidth and latency aside). This is, understandably, why the cloud is so popular, especially as most cloud services come as an add-on to your software package or, in the case of Dropbox, are essentially free – unless you need larger storage through a business account. It really is quite amazing!

So, why the chaos? Well, simply because your files are now spread across various cloud providers, and it is not until you bring them all together that they can be stored in one location. If we take Dreamweaver as our example, we need to place all the associated files in a root folder so the website can link to all its assets relatively. All the image files will be in the default image folder, HTML in another, CSS and JavaScript in others. So, we need to take the original files out of the cloud folders and place them on our server so the links all work. For most people, this probably is not an issue, but for me – with dyslexia and short-term memory loss – it is a nightmare! Did I move the file? Where is the file? Am I working with the latest version? Have I put all the files in the right folder and deposited it all on the server? I agree, the cloud makes life more collaborative and intuitive for most, but it is a new way of working that does take time to adapt to. Clearly, I am slowly getting used to disparate folders and locations.

How do you manage all your files and folders? Do you have your files across several paths and folders? How do you collate and share your workflow?

——————————————————————————————————-

I hope this article has gone some way in helping you understand the joys and chaos of cloud storage. If it has…please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author, – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real industry ready skills that matter at The Training Room.

#ttrIT #ttrcareerinIT #ttrLearnToCode

Visit his Blog and Website

Read more about Dr Richard Haddlesey BSc MSc PGCE PhD

I don’t wanna be Cyber Attacked – what can I do?

A question I often get asked is, “should I accept all these annoying updates from Microsoft?”

YES is the short answer, and really the only answer, but many don’t or won’t.

OK, so there are caveats. If you only connect to your own network at home, never use the World Wide Web, never do anything that requires passwords and do not use internet banking, then no, you do not need to update. If you do, however, you really should!

In a recent whitepaper, DUO* suggested that over 65% of devices running Microsoft Windows are running the outdated Windows 7, released back in 2009. Mainstream support for the operating system was withdrawn in January 2015, although Microsoft will still release security fixes until 2020. That removal of support was over two years ago, so if you are still running Windows 7 and accessing the World Wide Web, you are leaving yourself vulnerable to attack! Microsoft even offered a free upgrade to the more robust Windows 10, yet people still did not switch. I appreciate that people still like Windows XP and Windows 7, but they are just no longer safe or relevant in today’s online world. The software is changed and updated for a reason; it is not just to make money – after all, they gave the upgrade away for free – it is to plug the security holes and keep you safe online. This is not just a problem with out-of-date Windows devices; it is the same for Android and for Apple’s macOS and iOS. Yes, the Mac is just as vulnerable to attacks**, it is just that, with about 7% of the PC market, you tend to hear less about it***.

  • If you continue to use Windows XP now that support has ended, your computer will still work but it might become more vulnerable to security risks and viruses. Internet Explorer 8 is also no longer supported, so if your Windows XP PC is connected to the Internet and you use Internet Explorer 8 to surf the web, you might be exposing your PC to additional threats. Also, as more software and hardware manufacturers continue to optimize for more recent versions of Windows, you can expect to encounter more apps and devices that do not work with Windows XP. —Microsoft

Now, if you take your “old” device into work to connect to their network, you are making your entire company vulnerable to attack! Once you open a port from your device to the work intranet or Wi-Fi, you are giving attackers – via your outdated software – instant access to the network. You are allowing a would-be attacker easy access to an otherwise secure business network. At the very least, everything you can access an attacker can also access. If they are sophisticated, they can potentially gain access to the whole network. All this, just because you really like older versions of Windows! At my former place of work (a secondary school), a teacher brought in their old XP laptop and opened an email they had received from a person they did not know. Unwittingly, by opening that email on the school network, they introduced ransomware onto the network. This encrypted the entire school network and all drives. For nearly a week, the school network was unusable while the technicians worked to restore previous network backups. When the system was eventually restored, all the recent files people had been working on since the backup were lost. Obviously, the school did not pay any ransom, but only because they back up the system files twice a week; had they not done so, there would have been no way to restore the files without paying the ransom and getting the unlock code.

In the light of the recent cyber attacks in May 2017, Microsoft has come out and said this is a “wake-up call” and reiterated the need to install its security patches as and when they are released.

  • Ransomware is a type of malware that prevents or limits users from accessing their system, either by locking the system’s screen or by locking the users’ files unless a ransom is paid. More modern ransomware families, collectively categorized as crypto-ransomware, encrypt certain file types on infected systems and forces users to pay the ransom through certain online payment methods to get a decrypt key. – https://www.trendmicro.co.uk/vinfo/uk/security/definition/ransomware

I am certainly not trying to imply that, had the user been running an up-to-date version of Windows 10, this would never have happened. Instead, I am trying to add to the discussion that the often overlooked threat to network security is internal human error****. However, “User Behavioural Analytics” is beyond the scope of this discussion.

Summary

Keeping your system up to date with the latest security patches and software add-ons remains a highly important step in combating hackers.

In short —

INSTALL and UPDATE

  • Your Operating System
  • Your browser
  • Your browser add-ons
  • Anti-Virus software
  • Anti-Malware software
  • Anti-Spyware software
  • Firewall

  • Do NOT open unknown emails and attachments EVER!

Some people tend to think that if their device is set to download and install updates, alongside a disk defragmentation, automatically at the default time of 03:00, then that is enough to keep them safe – even if they turn their machine off before bed. Well…are you saying you expect the device to wake up at 03:00 and turn itself on, connect – by itself – to the internet, download and install updates/patches/drivers/code, then check your hard drive for errors, before turning itself off again and going back to sleep? I’m sorry, but it doesn’t!


 

I hope this article has gone some way in helping you understand the importance of UPDATES. If it has…please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author, – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real skills that matter.

#ttrIT #ttrcareerinIT #ttrLearnToCode

Visit his Blog and Website

Read more about Dr Richard Haddlesey BSc MSc PGCE PhD

Bibliography

*https://duo.com/resources/ebooks/the-2016-duo-trusted-access-report-microsoft-edition

**https://nakedsecurity.sophos.com/2016/09/02/patch-now-recent-ios-vulnerability-affects-macs-too/

***http://www.macworld.co.uk/how-to/mac-software/do-macs-get-viruses-do-macs-need-antivirus-software-3454926/

**** The Essential Guide to Behavior Analytics – www.balabit.com

A future skills gap in the wake of the new Computer Science GCSE?

As a former Secondary School Teacher, I was part of the government’s move away from traditional Information Communications Technology (ICT) toward Computer Science as a GCSE.

The change has been profound and caught many teachers off-guard. Many older teachers of ICT could not easily make the transition to teaching computer science. Why? Well because it is now a science! A science based on computational thinking and the logical creation and analysis of algorithms and coded solutions. In simplistic terms… it’s out with Microsoft Office and in with Python IDE!

Computer Science, then, is a completely different course from ICT. Obviously there exists some latent crossover but, for the most part, it is a much more relevant science/industry-based qualification compared to the more business-based ICT course. Much of what was ICT is now only a small part of the e-commerce side of Comp Sci. The subject has moved away from learning how to use software – such as MS Office – to create documents and websites. It is now much more about how to build apps, programs and e-portfolios, alongside maintaining computer systems, networks and cyber-security. As such, breaking down a problem and devising a sequenced plan, or algorithm, is now fundamental to the “art” of computational thinking.

 

My experience of teaching both ICT and Computer Science has taught me that not all students are capable of Computational Thinking and understanding algorithms. Not all can think sequentially and logically, many can only process freeform, nonlinear thoughts and can make little sense of a computer that can only do what it is told, in a specific order using a specific structured language or code.

This leads the teacher to have to focus more on trying to teach the students how to create algorithms and flowcharts and, of course, how to code. There are many high-quality educational aids for learning to code:

• https://code.org/learn
• https://scratch.mit.edu/starter_projects/
• http://www.alice.org/index.php
• https://www.codecademy.com/learn
• https://www.kodugamelab.com/
• https://codecombat.com/

Students, in my experience, find it difficult to code effectively because of the strict syntax. Although Python is forgiving compared with many languages, it is exacting in its syntax – in other words, if it expects a colon or comma, then it MUST have a colon or a comma! But why? “Well, it just does” can placate some students, but frustrates others. Trying to get the students to code effectively takes up a lot of teaching time at the expense of much of the theory. Most of the time we had to rely on students doing the theory for homework, which, inevitably, was hit and miss, with many students not bothering. The ability to create a working solution to a problem almost always forms the basis of at least one of their final Controlled Assessments, in which the student must plan, code and test a solution efficiently with no guided help from their teacher or peers. Because this is crucial to a good final grade, it is obvious that teaching and learning how to code and troubleshoot code is a classroom priority.

So, you may ask, why am I writing this blog? Well, because I believe there will continue to be a skills gap when our present and future cohorts of GCSE Computer Science students leave school. I am convinced that they will certainly be better equipped than their ICT-qualified peers; however, with so much time given over to learning Python, I think they will be lacking solid industry skills. Don’t get me wrong; I think learning Python, computational thinking and algorithms is a massive step in the right direction. However, students often lack the ability to translate their learning of Python into other C-based languages, or into HTML, SQL, JavaScript, etc. And no matter how hard we tried to drill the students on the importance of planning and writing algorithms first, rather than retro-engineering them, they always wanted to code first and then make up a plan to fit the program.

Anyway, I digress… I am not trying to push a solution – after all, there is no single solution – I am just pointing out my observations in order to try to start a discussion on the future of the industry, and on whether others have noticed a skills gap in GCSE students.

I hope this article has gone some way in helping start a discussion on possible future skills gaps. If it has…please LIKE, SHARE or FEEDBACK the post. Thank you.

About the Author – Dr Richard Haddlesey is the founder and Webmaster of English Medieval Architecture in which he gained a Ph.D. in 2010 and holds Qualified Teacher Status relating to I.C.T. and Computer Science. Richard is a professional Web Developer and Digital Archaeologist and holds several degrees relating to this. He is passionate about the dissemination of research and advancement of digital education and Continued Professional Development #CPD. Driven by a desire to better prepare students for industry, Richard left mainstream teaching to focus on a career in tutoring I.T. professionals with real skills that matter. Thus, catering more to the individual learner’s needs relevant to their career pathway than the National Curriculum taught in schools is presently capable of.

#ttrIT #ttrcareerinIT #ttrLearnToCode
