DIG DEEPER TO FIND WATER & NOT WIDER -P.M.PATEL

Wednesday, September 23, 2020

Planet X3 - Review Of A New Real Time Strategy Game For The IBM PC


Title Screen VGA
Retro video game homebrew is an ever-maturing market. Talented coders spend countless hours getting their games into a playable, bug-fixed state; small teams combine their talents to handle differing workloads (graphics, sound, programming); and the result is hopefully a video game that will sell enough copies to make it worth all the effort. Homebrew software has become popular on console platforms like the NES, Atari 2600, ColecoVision, Intellivision and Sega Genesis. Homebrew for personal computers has not taken off quite the way it has on the more popular consoles. Nonetheless, there are talented individuals making homebrew software for the IBM PC-compatible MS-DOS platform. Today I am going to review the latest homebrew game for the IBM PC and compatibles, 8-bit Guy's Planet X3, identify its strengths and weaknesses, determine how well it met its design goals, and speculate on its role in the evolution of PC homebrew.


Tuesday, September 22, 2020

HIDDEN GEMS - TOP 10 FIRST PERSON SHOOTERS OF THE '90S


We're doing something a little different today - a Top 10 video! Only, this being The Collection Chamber, it has to be about obscure hidden gems! Check out the video and let me know any that I've missed.

Some have already been featured on the site, so follow the jump for links to their reviews.



Sunday, September 13, 2020

Tech Book Face Off: Data Smart Vs. Python Machine Learning

After reading a few books on data science and a little bit about machine learning, I felt it was time to round out my studies in these subjects with a couple more books. I was hoping to get some more exposure to implementing different machine learning algorithms as well as diving deeper into how to effectively use the different Python tools for machine learning, and these two books seemed to fit the bill. The first book with the upside-down face, Data Smart: Using Data Science to Transform Data Into Insight by John W. Foreman, looked like it would fulfill the former goal and do it all in Excel, oddly enough. The second book with the right side-up face, Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow by Sebastian Raschka and Vahid Mirjalili, promised to address the second goal. Let's see how these two books complement each other and move the reader toward a better understanding of machine learning.

Data Smart front cover vs. Python Machine Learning front cover

Data Smart

I must admit, I was somewhat hesitant to get this book. I was worried that presenting everything in Excel would be a bit too simple to really learn much about data science, but I needn't have been concerned. This book was an excellent read for multiple reasons, not least of which is that Foreman is a highly entertaining writer. His witty quips about everything from middle school dances to Target predicting teen pregnancies were a great motivator to keep me reading along, and more than once I caught myself chuckling out loud at an unexpectedly absurd reference.

It was refreshing to read a book about data science that didn't take itself too seriously and added a bit of levity to an otherwise dry (interesting, but dry) subject. Even though it was lighthearted, the book was not a joke. There was an intensity to the material that was surprising given the medium through which it was presented. Spreadsheets turned out to be a great way to show how these algorithms are built up, and you can look through the columns and rows to see how each step of each calculation is performed. Conditional formatting helps guide understanding by highlighting outliers and important contrasts in the rows of data. Excel may not be the best choice for crunching hundreds of thousands of entries in an industrial-scale model, but for learning how those models actually work, I'm convinced that it was a worthy choice.

The book starts out with a little introduction that describes what you got yourself into and justifies the choice of Excel for those of us that were a bit leery. The first chapter gives a quick tour of the important parts of Excel that are going to be used throughout the book—a skim-worthy chapter. The first real chapter jumps into explaining how to build up a k-means cluster model for the highly critical task of grouping people on a middle school dance floor. Like most of the rest of the chapters, this one starts out easy, but ramps up the difficulty so that by the end we're clustering subscribers for email marketing with a dozen or so dimensions to the data.
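The book builds all of this out of spreadsheet formulas, but for comparison's sake, here is roughly what that first k-means model looks like in Python with scikit-learn (the dance-floor coordinates are invented for illustration, not taken from the book):

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical dance-floor positions: each row is one person's (x, y) spot.
positions = np.array([
    [1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # one huddle of dancers
    [5.0, 5.1], [5.2, 4.8], [4.9, 5.3],   # another huddle across the floor
])

# Fit k-means with k=2; n_init=10 restarts guard against bad local optima.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(positions)
print(model.labels_)           # which cluster each person belongs to
print(model.cluster_centers_)  # the two cluster centroids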

Chapter 3 switches gears from an unsupervised to a supervised learning model with naïve Bayes for classifying tweets about Mandrill the product vs. the animal vs. the Mega Man X character. Here we can see how irreverent, yet on-point, Foreman is with his explanations:
Because naïve Bayes is often called "idiot's Bayes." As you'll see, you get to make lots of sloppy, idiotic assumptions about your data, and it still works! It's like the splatter-paint of AI models, and because it's so simple and easy to implement (it can be done in 50 lines of code), companies use it all the time for simple classification jobs.
Every chapter is like this and better. You never know what Foreman's going to say next, but you quickly come to expect it to be entertaining. Case in point: the next chapter is on optimization modeling using an example of, what else, commercial-scale orange juice mixing. It's just wild; you can't make this stuff up. Well, Foreman can make it up, it seems. The examples weren't just whimsical and funny; they were solid examples that built up throughout the chapter to show multiple levels of complexity for each model. I was constantly impressed with the instructional value of these examples, and how working through them really helped in understanding what to look for to improve the model and how to make it work.
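Foreman's parenthetical about 50 lines of code holds up; with scikit-learn it takes far fewer. Here's a minimal sketch of a Mandrill-style tweet classifier (the tweets and labels are my own toy data, not the book's):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training tweets labeled as being about the product or the animal.
tweets = ["mandrill api for email delivery", "saw a mandrill at the zoo",
          "sending mail through mandrill", "mandrill monkeys in the jungle"]
labels = ["product", "animal", "product", "animal"]

# Bag-of-words counts feeding a multinomial naive Bayes classifier.
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(tweets, labels)
print(classifier.predict(["mandrill email integration"]))  # -> ['product']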

After optimization came another dive into cluster analysis, but this time using network graphs to analyze wholesale wine purchasing data. This model was new to me, and a fascinating way to use graphs to figure out closely related nodes. The next chapter moved on to regression, both linear and non-linear varieties, and this happens to be the Target-pregnancy example. It was super interesting to see how to conform the purchasing data to a linear model and then run the regression on it to analyze the data. Foreman also had some good advice tucked away in this chapter on data vs. models:
You get more bang for your buck spending your time on selecting good data and features than models. For example, in the problem I outlined in this chapter, you'd be better served testing out possible new features like "customer ceased to buy lunch meat for fear of listeriosis" and making sure your training data was perfect than you would be testing out a neural net on your old training data.

Why? Because the phrase "garbage in, garbage out" has never been more applicable to any field than AI. No AI model is a miracle worker; it can't take terrible data and magically know how to use that data. So do your AI model a favor and give it the best and most creative features you can find.
As I've learned in the other data science books, so much of data analysis is about cleaning and munging the data. Running the model(s) doesn't take much time at all.
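For reference, the Target-style regression from this chapter is only a few lines in Python. A sketch of logistic regression on placeholder purchase data (the features and labels are invented stand-ins, nothing to do with Target's actual model):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder features: each row is a customer, each column a purchase
# indicator (e.g., prenatal vitamins, unscented lotion, supplements).
X = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1],
              [0, 0, 0], [0, 1, 0], [0, 0, 1]])
y = np.array([1, 1, 1, 0, 0, 0])  # made-up labels: 1 = pregnant, 0 = not

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[1, 1, 0]])[:, 1])  # estimated probability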
We're into chapter 7 now with ensemble models. This technique takes a bunch of simple, crappy models and improves their performance by putting them to a vote. The same pregnancy data was used from the last chapter, but with this different modeling approach, it's a new example. The next chapter introduces forecasting models by attempting to forecast sales for a new business in sword-smithing. This example was exceptionally good at showing the build-up from a simple exponential smoothing model to a trend-corrected model and then to a seasonally-corrected cyclic model all for forecasting sword sales.
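Simple exponential smoothing is small enough to write from scratch, which gives a feel for what the spreadsheet version is computing cell by cell (the sales figures here are made up):

def exponential_smoothing(series, alpha):
    """Each smoothed value blends the latest observation with the
    previous smoothed value, weighted by alpha."""
    smoothed = [series[0]]  # seed with the first observation
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

# Made-up monthly sword sales; alpha controls how quickly old data fades.
sales = [120, 132, 101, 134, 190, 170, 210]
print(exponential_smoothing(sales, alpha=0.3))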

The next chapter was on detecting outliers. In this case, the outliers were exceptionally good or exceptionally bad call center employees, even though the bad employees didn't fall below any individual firing thresholds on their performance ratings. It was another excellent example to cap off a whole series of well-thought-out and well-executed examples. There was one more chapter on how to do some of these models in R, but I skipped it. I'm not interested in R, since I would just use Python, and this chapter seemed out of place with all the spreadsheet work in the rest of the book.

What else can I say? This book was awesome. Every example of every model was deep, involved, and appropriate for learning the ins and outs of that particular model. The writing was funny and engaging, and it was clear that Foreman put a ton of thought and energy into this book. I highly recommend it to anyone wanting to learn the inner workings of some of the standard data science models.

Python Machine Learning

This is a fairly long book, certainly longer than most books I've read recently, and a pretty thorough and detailed introduction to machine learning with Python. It's a melding of a couple of other good books I've read, containing quite a few machine learning algorithms that are built up from scratch in Python a la Data Science from Scratch, and showing how to use the same algorithms with scikit-learn and TensorFlow a la the Python Data Science Handbook. The text is methodical and deliberate, describing each algorithm clearly and carefully, and giving precise explanations of how each algorithm is designed and what its trade-offs and shortcomings are.

As long as you're comfortable with linear algebraic notation, this book is a straightforward read. It's not exactly easy, but it never takes off into the stratosphere with the difficulty level. The authors also assume you already know Python, so they don't waste any time on the language, instead packing the book completely full of machine learning stuff. The shorter first chapter still does the introductory tour of what machine learning is and how to install the correct Python environment and libraries that will be used in the rest of the book. The next chapter kicks us off with our first algorithm, showing how to implement a perceptron classifier as a mathematical model, as Python code, and then using scikit-learn. This basic sequence is followed for most of the algorithms in the book, and it works well to smooth out the reader's understanding of each one. Model performance characteristics, training insights, and decisions about when to use the model are highlighted throughout the chapter.
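To give a flavor of that sequence, here's a from-scratch perceptron in the same spirit as the book's chapter 2 model (my own minimal sketch, not the authors' code):

import numpy as np

class Perceptron:
    """Minimal Rosenblatt perceptron: nudge the weights toward each
    misclassified example until the classes are separated."""
    def __init__(self, eta=0.1, n_iter=10):
        self.eta, self.n_iter = eta, n_iter

    def fit(self, X, y):  # y must use the labels -1 and +1
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(self.n_iter):
            for xi, target in zip(X, y):
                update = self.eta * (target - self.predict(xi))
                self.w += update * xi
                self.b += update
        return self

    def predict(self, X):
        return np.where(np.dot(X, self.w) + self.b >= 0.0, 1, -1)

# AND-style toy data: linearly separable, so training converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([-1, -1, -1, 1])
print(Perceptron().fit(X, y).predict(X))  # -> [-1 -1 -1  1]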

Chapter 3 delves deeper into perceptrons by looking at different decision functions that can be used for the output of the perceptron model, and how they could be used for more things beyond just labeling each input with a specific class as described here:
In fact, there are many applications where we are not only interested in the predicted class labels, but where the estimation of the class-membership probability is particularly useful (the output of the sigmoid function prior to applying the threshold function). Logistic regression is used in weather forecasting, for example, not only to predict if it will rain on a particular day but also to report the chance of rain. Similarly, logistic regression can be used to predict the chance that a patient has a particular disease given certain symptoms, which is why logistic regression enjoys great popularity in the field of medicine.
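That rain example maps directly onto scikit-learn: predict() applies the threshold, while predict_proba() exposes the sigmoid's probability estimate. A toy sketch (the humidity numbers are invented):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy one-feature data set: did it rain (1) at a given humidity level?
humidity = np.array([[20], [35], [50], [65], [80], [95]])
rained = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(humidity, rained)
print(model.predict([[70]]))        # hard label: rain or no rain
print(model.predict_proba([[70]]))  # [P(no rain), P(rain)]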
The sigmoid function is a fundamental tool in machine learning, and it comes up again and again in the book. Midway through the chapter, they introduce three new algorithms: support vector machines (SVM), decision trees, and K-nearest neighbors. This is the first chapter where we see an odd organization of topics. It seems like the first part of the chapter really belonged with chapter 2, but including it here instead probably balanced chapter length better. Chapter length was quite even throughout the book, and there were several cases like this where topics were spliced and diced between chapters. It didn't hurt the flow much on a complete read-through, but it would likely make going back and finding things more difficult.

The next chapter switches gears and looks at how to generate good training sets with data preprocessing, and how to train a model effectively without overfitting using regularization. Regularization is a way to systematically penalize the model during training for assigning the large weights that lead to memorizing the training data. Another way to avoid overfitting is to use ensemble learning with a model like random forests, which are introduced in this chapter as well. The following chapter looks at how to do dimensionality reduction, both unsupervised with principal component analysis (PCA) and supervised with linear discriminant analysis (LDA).
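In scikit-learn, the regularization penalty described above usually comes down to a single knob. Here's a quick sketch (my own illustration) showing how strengthening the L2 penalty, by shrinking C in LogisticRegression, pulls the learned weights toward zero:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic classification data so the example is self-contained.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Smaller C means a stronger L2 penalty, and therefore smaller weights.
for C in (100.0, 1.0, 0.01):
    model = LogisticRegression(C=C).fit(X, y)
    print(f"C={C:>6}: mean |weight| = {np.abs(model.coef_).mean():.3f}")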

Chapter 6 comes back to how to train your dragon…I mean model…by tuning the hyperparameters of the model. The hyperparameters are just the settings of the model, like what its decision function is or how fast its learning rate is. It's important during this tuning that you don't pick hyperparameters that are just best at identifying the test set, as the authors explain:
A better way of using the holdout method for model selection is to separate the data into three parts: a training set, a validation set, and a test set. The training set is used to fit the different models, and the performance on the validation set is then used for the model selection. The advantage of having a test set that the model hasn't seen before during the training and model selection steps is that we can obtain a less biased estimate of its ability to generalize to new data.
It seems odd that a separate test set isn't enough, but it's true. Training a machine isn't as simple as it looks. Anyway, the next chapter circles back to ensemble learning with a more detailed look at bagging and boosting. (Machine learning has such creative names for things, doesn't it?) I'll leave the explanations to the book and get on with the review, so the next chapter works through an extended example application to do sentiment analysis of IMDb movie reviews. It's kind of a neat trick, and it uses everything we've learned so far together in one model instead of piecemeal with little stub examples. Chapter 9 continues the example with a little web application for submitting new reviews to the model we trained in the previous chapter. The trained model will predict whether the submitted review is positive or negative. This chapter felt a bit out of place, but it was fine for showing how to use a model in a (semi-)real application.
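Going back to that three-part split for a moment: it falls out of two calls to scikit-learn's train_test_split. A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Carve off a final test set first, then split the rest into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

# Tune hyperparameters against the validation set; touch the test set once.
print(len(X_train), len(X_val), len(X_test))  # -> 90 30 30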

Chapter 10 covers regression analysis in more depth with single and multiple linear and nonlinear regression. Some of this stuff has been seen in previous chapters, and indeed, the cross-referencing starts to get a bit annoying at this point. Every single time a topic comes up that's covered somewhere else, it gets a reference with the full section name attached. I'm not sure how I feel about this in general. It's nice to be reminded of things that you've read about hundreds of pages back, and I've read books that are more confusing for not having done enough of this linking, but it does get tedious when the immediately preceding sections are referenced repeatedly. The next chapter is similar, with a deeper look at unsupervised clustering algorithms. The new k-means algorithm is introduced, but it's compared against algorithms covered in chapter 3. This chapter also covers how we can decide if the number of clusters chosen is appropriate for the data, something that's not so easy for high-dimensional data.
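One standard tool for that decision is the silhouette score, which measures how much closer each point sits to its own cluster than to the next nearest one. A quick sketch on synthetic data (my example, not the book's):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with 3 true clusters; the score should peak near k=3.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette = {silhouette_score(X, labels):.3f}")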

Now that we're two-thirds of the way through the book, we come to the elephant in the machine learning room, the multilayer artificial neural network. These networks are built up from perceptrons with various activation functions:
However, logistic activation functions can be problematic if we have highly negative input since the output of the sigmoid function would be close to zero in this case. If the sigmoid function returns output that are close to zero, the neural network would learn very slowly and it becomes more likely that it gets trapped in the local minima during training. This is why people often prefer a hyperbolic tangent as an activation function in hidden layers.
And they're trained with various types of back-propagation. Chapter 12 shows how to implement neural networks from scratch, and chapter 13 shows how to do it with TensorFlow, where the network can end up running on the graphics card supercomputer inside your PC. Since TensorFlow is a complex beast, chapter 14 gets into the nitty-gritty details of what all the pieces of code do in the implementation of the handwritten digit identifier from the last chapter. This is all very cool stuff, and after learning a bit about how to do the CUDA programming that's behind this library with CUDA by Example, I have a decent appreciation for what Google has done to make it as flexible, performant, and user-friendly as they can. It's not simple by any means, but it's as complex as it needs to be. Probably.
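The book targets the TensorFlow 1.x API of its day; in the modern Keras API, the same kind of digit classifier comes out to roughly this (my sketch, not the book's code):

import tensorflow as tf

# Load the MNIST handwritten digits and scale pixels to the [0, 1] range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network: flatten, one hidden layer, softmax out.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="tanh"),  # tanh, per the quote above
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))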

The last two chapters look at two more types of neural networks: the deep convolutional neural network (CNN) and the recurrent neural network (RNN). The CNN does the same hand-written digit classification as before, but of course does it better. The RNN is a network that's used for sequential and time-series data, and in this case, it was used in two examples. The first example was another implementation of the sentiment analyzer for IMDb movie reviews, and it ended up performing similarly to the regression classifier that we used back in chapter 8. The second example showed how to train an RNN on Shakespeare's Hamlet to generate similar text. It sounds cool, but frankly, it was pretty disappointing for the last example of the most complicated network in a machine learning book. It generated mostly garbage and was just a let-down at the end of the book.

Even though this book had a few issues, like tedious code duplication and explanations in places, the annoying cross-referencing, and the out-of-place chapter 9, it was a solid book on machine learning. I got a ton out of going through the implementations of each of the machine learning algorithms, and wherever the topics started to stray into more in-depth material, the authors provided references to the papers and textbooks that contained the necessary details. Python Machine Learning is a solid introductory text on the fundamental machine learning algorithms: how they work mathematically, how they're implemented in Python, and how to use them with scikit-learn and TensorFlow.


Of these two books, Data Smart is a definite read if you're at all interested in data science. It does a great job of showing how the basic data analysis algorithms work using the surprisingly effective method of laying out all of the calculations in spreadsheets, and doing it with good humor. Python Machine Learning is also worth a look if you want to delve into machine learning models, see how they would be implemented in Python, and learn how to use those same models effectively with scikit-learn and TensorFlow. It may not be the best book on the topic, but it's a solid entry and covers quite a lot of material thoroughly. I was happy with how it rounded out my knowledge of machine learning.

Movie Reviews: A Star Is Born, Bohemian Rhapsody, Christopher Robin, Eighth Grade, First Man

See all of my movie reviews.

A Star is Born (2018) - Bradley Cooper directs, writes, and stars in this third (at least) remake of the 1937 story. He is joined by the captivating and talented Lady Gaga. I assume you know the story, so here be general spoilers.

The original story is about a talented man whose best days are behind him. He is on the way out, but he finds and starts the career of a young woman. They fall in love. He is depressed and an alcoholic, not only because he is no longer wanted, but because he can't take the idea of a youngster, and a woman at that, besting him. Meanwhile, out of love, or maybe out of what is expected of a woman, she is on the verge of giving up her career because she thinks she can save him if they live a normal life. He overhears this and decides to end his life, either because he has finally reached bottom or so as not to allow her to give up her dreams for him.

This remake downplays the parts that make it seem like it is natural for her to give up her stardom for his sake. He has a drug and alcohol problem. She doesn't consider giving up her career, although she makes an attempt to get him booked on her tour, threatening to not do her tour if he is not allowed to join her. Her manager is a creep who flat out tells him that he is in her way, which leads him to end his life; this is far more sinister than having him overhear a conversation he should not have heard.

This is a pretty good movie, with good original music. Everyone gives a solid performance, and most of the camera work and directing is excellent (I had one or two minor quibbles, nothing major). The leads have good chemistry, and Lady Gaga's singing can blow you away; I suppose some will complain that no one can sing like Barbra Streisand in the second remake from 1976, but that movie wasn't as good as this one.

It is emotionally draining, however, if you have a hard time watching someone resort to suicide (not graphic, but the scene is long) or a woman having to deal with a lover who is an alcoholic and drug addict. Just so you know.

Bohemian Rhapsody - A biopic of Freddie Mercury of Queen, and also the story of Queen, from its founding until Live Aid. The main plot elements are Freddie vs his girlfriend Mary (as he comes to realize he is gay), Freddie vs his manager, Freddie vs some boyfriends and the swinging '80s lifestyle, Freddie vs his family and his traditional background, Freddie vs his contracting AIDS (only superficially covered), and Freddie vs his band-mates.

If you love Queen's music, of course you will love the movie. If you hate Queen's music ... what's wrong with you? Some of their songs, like We Will Rock You and We Are the Champions, seem like they were chiseled out of music itself. On its own merits, Rami Malek does a great job as Freddie, and Lucy Boynton as Mary and Gwilym Lee as Brian May also shine, as does the rest of the cast. The plot is captivating, since Freddie seems equal parts genius arranger and singer, but also self-destructive and helpless. Mary, if you believe the movie, is the one who drags him back into sanity, even while she is kept apart from him due to his sexuality.

As an ending to the movie, Live Aid, while a lovely concert, doesn't really answer all of the questions. If you know the real story, you know that a lot of the early days are skipped over or compressed (they went through a bunch of bass guitarists and their first album was not a great success), Live Aid was a phenomenal triumph, and the story continues to the early 90's. So threads are left dangling.

But it doesn't matter. Good performances and great music, an interesting portrait of a tormented genius. Not the best movie ever made, but worth watching.

Christopher Robin - Ewan McGregor plays a grown-up Christopher Robin, famous son of A. A. Milne, who works as an efficiency expert in London and who is tasked with firing a bunch of people unless he can figure out a way to save their jobs. He runs into Pooh Bear, who needs Christopher Robin to help him find more honey in the Hundred Acre Wood. CR tries to make sense of this, and they go on several adventures. Everyone learns something by the end of the movie.

The closest analogy here would be Hook (Robin Williams). It's an okay movie, though rather childish and clichéd. Kids will probably enjoy it. I got a bit bored.

It's a little odd to see this movie after last year's Goodbye Christopher Robin, which painted a rather grimmer picture of CR's relationship to his father's stories.

Eighth Grade - A good but intense look at an eighth-grade girl (Elsie Fisher) who spends all of her time, and tries to find all of her validation, on social media. Her real life, unfortunately, doesn't conform to her expectations from her virtual one. Not only does she have low self-esteem and low popularity and fall for the wrong boy, she also runs head-on into a few moments of real danger and harassment that raise the stakes of what happens in real life.

Josh Hamilton plays her single father, desperately trying to help and support her while she fights to keep him out. It's not an easy movie to watch, but it's a fairly good one.

First Man - A biopic of Neil Armstrong, and also the story of the mission to land a man on the moon. Unlike Bohemian Rhapsody, in which the focus on one character made the story interesting, I wasn't as happy here. Neil has a few problems with his wife and kids, but not really; I'm pretty sure most of the problems were invented by the screenwriters. The conflict with his wife was not believably portrayed. Meanwhile, all the parts about the moon landing were fascinating, but they were not the main focus of the movie.

The movie makes several other mistakes. Instead of a grand story of triumphs and tragedies (i.e., what really happened), the story concentrates solely on a series of tragedies (real ones). I guess that's the screenwriter's way of ratcheting up the tension, but it (a) makes the story very narrow and small, more like a Marvel movie than a real story, and (b) makes it unrealistic: why would anyone continue with a program that fails so tragically and continuously, over and over, killing people each time? Of course, that wasn't the real or entire story. But we don't get to hear the real or entire story.

The worst parts for me were (a) the long sequences of shaking cameras that simulated the shaking rockets and flights. One such sequence of reasonable length in a movie is great; this movie does it at least three times, for 20 minutes each time. At some point it moves from being a good simulation to being distracting and unwatchable. Enough already. And (b) about sixty percent of the movie is a closeup of someone's face. This is the same mistake made in Jackie. Again: a few face closeups are great, but 60% of the screen time spent on face closeups is not. It's just pretentious, distancing, and annoying. Which is a crying shame, because the cinematography of the other 40% is beautiful.

Aside from all that was bad about the movie, it did everything else well: well acted, well scored, well paced, and an important piece of history. For what it's worth, my fellow movie-goers (friends) liked the movie.

Friday, September 4, 2020

Screen Persistence And The GBA - LCD Abuse

The Game Boy Advance has a TFT LCD screen, and in its last variants, the screen was backlit. TFT screens offer faster pixel response times than earlier passive-matrix technology. The GBA's TFT LCD was an improvement over the earlier screens used in the Game Boy Color, but developers occasionally took advantage of these screens' response times to create interesting effects. Let's take a look.

