The Man Behind the Meme - Interview with Alphakek

Apr 14, 2025


We caught up with Vladimir, founder of Alphakek, to dive into the wild world of AI-generated memes—and discovered there’s way more going on behind the scenes. From building custom models trained on crypto culture to developing serious tools like their proprietary knowledge graph Fractal and the mysterious Magic Gateway Protocol, Alphakek is where internet humor meets cutting-edge innovation.

Here’s what Vladimir had to say about memes, models, and the future of AI-powered creativity.

Tell us a bit about Alphakek—how did it all start? What inspired you to build an AI that generates memes?

Nice question! I trained my first AI model over 10 years ago. Since then, I have developed AI solutions for video recognition, code analysis, astrophysics, and graphic design. Now, I am making crypto-native AI models for crypto data, autonomous agents, and memes. I love that the current AI field sits at the intersection of the most recent advances in computer science, sophisticated math, and pure alchemy. It’s just so much fun. Adding memes only makes it funnier.


Our community knows you best for the Alphakek Meme Generator on Vertical Stream, but that’s just a small part of Alphakek. What other cool projects are you cooking up behind the scenes?

Aside from our custom fine-tuned uncensored models, one of the features that makes us stand out is our proprietary knowledge graph, Fractal. It aggregates hundreds of thousands of onchain and offchain crypto data sources and provides real-time context and awareness to every AI agent powered by Alphakek. It can be used for both crypto research and meme-making. If you want to know more, you can check out our presentation on the Nvidia website.
Another thing we’re working on is the Magic Gateway Protocol, or MAGE. The details are secret for now, but I’m pretty sure you will see it on Vertical AI quite soon!


Memes are chaotic and unpredictable. How did you approach collecting data and training your model to understand humor?

How we collect data depends on the case: sometimes, as with our 4chan meme model, we collect hundreds of thousands of images ourselves. Crypto-specific memes are easier, as teams usually keep their own meme archives. When there isn’t enough high-quality data, we bring in our in-house artist.
Humor is the trickier part, and it isn’t fully solved yet. We combine custom memes with our knowledge graph Fractal to make the AI aware of the cultural context of each situation, so it can come up with the best memes. We call it Chain-of-Meme.
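The Chain-of-Meme idea, as described, amounts to retrieving cultural context from the knowledge graph and feeding it into the prompt alongside the current situation. Fractal and its API are proprietary, so everything below (the toy graph, `retrieve_context`, `build_meme_prompt`) is an invented illustration of the general pattern, not Alphakek's actual code:

```python
# Toy stand-in for a knowledge graph: topic -> known cultural facts.
# Alphakek's real Fractal graph is proprietary; this is illustrative only.
TOY_GRAPH = {
    "DOGE": ["dog-themed memecoin", "Shiba Inu mascot", "'much wow' catchphrase"],
    "BTC": ["first cryptocurrency", "digital gold narrative", "halving cycles"],
}

def retrieve_context(topic: str, graph: dict) -> list:
    """Look up facts the model should know about the topic."""
    return graph.get(topic.upper(), [])

def build_meme_prompt(topic: str, situation: str) -> str:
    """Fuse graph context with the current situation into one generation prompt."""
    facts = retrieve_context(topic, TOY_GRAPH)
    context = "; ".join(facts) if facts else "no known context"
    return f"Context: {context}. Situation: {situation}. Make a meme about {topic}."

prompt = build_meme_prompt("doge", "price just doubled overnight")
```

The key design point is that the generator never sees the raw graph: it only receives a compact, situation-relevant slice of it, which keeps the prompt short while still grounding the humor in current context.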


What was the biggest challenge in training your AI? Any funny or unexpected results along the way?

The biggest challenge, I would say, is that every dataset is unique and requires its own approach. While we streamline and automate as much as possible, there is always some manual pre- and post-processing needed to achieve the best results.
Speaking of funny results: last month, one of the open-source AI libraries we use released a new version that fixed a bug related to model training. Our training pipelines, however, had been built with that bug in mind, so the fix actually decreased the quality of our models! Until we landed a proper fix, we forked the library and re-added the bug, so we could use the latest version of the library while still enjoying our “not a bug”.
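A lighter-weight alternative to forking, in situations like this, is monkey-patching the changed function back to its old behavior at import time. The sketch below uses an invented stand-in module and functions (`fakelib`, `fixed_scale`, `legacy_scale`); it is not the real library Alphakek forked, just the general technique:

```python
import types

# Stand-in for the third-party module that shipped the fix.
fakelib = types.SimpleNamespace()

def fixed_scale(x: float) -> float:
    """New upstream behavior after the bug fix."""
    return x * 2.0

fakelib.scale = fixed_scale

def legacy_scale(x: float) -> float:
    """Old 'buggy' behavior the training pipeline was tuned around."""
    return x * 2.0 + 1.0

# Monkey-patch: downstream code calling fakelib.scale now gets the
# legacy behavior again, without maintaining a full fork.
fakelib.scale = legacy_scale
```

The trade-off versus a fork is fragility: a patch like this silently breaks if upstream renames or inlines the function, whereas a fork fails loudly at merge time.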


For those interested in training their own AI models (maybe even for memes), what advice would you give?

I think the best place to get started is one of Hugging Face’s courses. They cover all the popular topics: LLMs, diffusion models, and reinforcement learning.


Once a model is fine-tuned, how do you keep improving it? What strategies do you use to make it even better over time?

There are two ways we keep improving them:
1. Add more data;
2. Re-train the model using newer architectures and/or training pipelines.

Sometimes, we do both at the same time! The most recent examples are our Okayeg and Brett meme models, both of which are coming to Vertical AI soon.


What’s next for Alphakek? AI-generated deep-fried memes? Fully automated shitposting bots? Or are we heading toward a future where the singularity is just an endless stream of Wojaks?

Yes. All of these scenarios are going to happen, and the Magic Gateway Protocol will be the cornerstone of it. Stay tuned!

Interested in creating your own meme? Visit Alphakek’s Meme Generator now on Vertical Stream.

Follow Alphakek on X (Twitter): https://x.com/alphakek_ai



