Posts

Showing posts from April, 2025

Napkin.ai: A Review

In my latest video, I explored Napkin.ai and how it can quickly generate impressive data visualizations for presentations straight from plain text. It shines when dealing with concrete information, making charts and graphics with ease. However, it struggles with more abstract concepts—like visualizing emotions or intangible ideas. I wrapped up by questioning whether niche AI tools like this can stay relevant as all-in-one platforms like ChatGPT continue to grow in capability.  

Humanize AI: A Review

I started by breaking down how Humanize AI claims to make AI-generated text sound more human and bypass detection tools. At first, it actually managed to fool GPTZero with a short essay, which was impressive. But once I tested it on longer pieces, it completely fell apart. In the end, I found it pretty unreliable and couldn’t ignore the ethical red flags around using tools like this in the first place.

What is Right?

The way we humans decide what is right and what is wrong will always depend on what the majority of the population thinks. Always. Let’s take an example. Around the 15th century in Europe, a type of medicine was extremely prevalent: corpse medicine. It was widely believed that consuming parts of the human body (think powdered skulls) could cure ailments and pains. It wasn’t just believed, either; it was socially acceptable. Today, anybody you ask would be disgusted by it: “How could they even think of doing that?” That’s because, throughout history, we can see that ideas and morals have evolved. Not improved. Evolved. Why do I say that? Instead of the past, let’s jump to the future. With veganism spreading further and wider, it is safe to assume that in 50–100 years, maybe even in our lifetimes, it won’t be “right” to eat, let’s say, chicken. You can’t kill one, you can’t eat one. Those people in the future will look at their history books and think that we were morally wro...