if you want to know what i did today,
i reluctantly got up at 15:00, not too hungry, and headed straight to the computer. my alcoholism has subsided
i rapped for two hours on song10 and realised the errors in my enunciation. if i rap schoolboy q’s attention, i can grip onto the beat perfectly but the flow is sinful. if i rap dave’s trojan horse, my flow is much better, even though i’ve never rapped that one before and really can’t mimic his flow
it reminds me of what a friend said when i spit to a uk song: it actually sounds good, and the specific detail i notice is the ease of gliding over words. not mumbling, but rhythmically connecting
it’s like i cant do it with US lyrics, irrespective of whether i use a US accent or not - but u will notice it’s a lot easier to sing a US rap song with a US accent than to force my natural accent onto a US song, because the accent dictates how you connect the words, whether u enunciate your Ts a certain way or not
specifically, for a UK song i float much better: not when the bpm is slow and i’m enunciating properly, but when the bpm is fast, or when i instead hustle abreast the beat
more syllables is better with my london accent.
rapping is remarkable; these musicians aren’t doing something simple
my practice will be useful for the series when i do the vocals
i was using deepseek’s reasoner (r1), primarily because it’s cheaper, to extend the 12 bars of my rap already written.
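for anyone curious, the call itself is nothing special. a minimal sketch of the request body, assuming deepseek’s openai-compatible chat completions endpoint and the `deepseek-reasoner` model name (both are my assumptions from their docs; check the current names before relying on this):

```python
# sketch of the request body for deepseek's openai-compatible
# chat completions endpoint (api.deepseek.com); the model name
# "deepseek-reasoner" is an assumption from their public docs
def build_request(goal: str, bars: str) -> dict:
    return {
        "model": "deepseek-reasoner",
        "messages": [
            # one user turn: the instruction, then the existing bars
            {"role": "user", "content": f"GOAL: {goal}\n\n{bars}"},
        ],
        # reasoner models spend hidden tokens thinking, so leave headroom
        "max_tokens": 2048,
    }

payload = build_request(
    'this is a rap. maintaining the essence of my lyrics, '
    'continue the fourth paragraph with more lines starting with "u"',
    "u start doing things that u never wanted\n"
    "u do what u do - it's a sin - it's a sin",
)
```

the body then just gets POSTed with an api key in the authorization header, same as any openai-style client.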
GOAL: this is a rap. maintaining the essence of my lyrics, continue the fourth paragraph with more lines starting with "u"
why dont we go to club this week
why dont we go to club this week
why dont we go to club this week
why dont we go to club this week
everything u do
u dont do shit
ur at home writing, u aint write shit
u dont even like what u wrote this week
nothing matters anymore
u dont realise what the world will give
when we grow up and we get up and we graduating
u start doing things that u never wanted
u start fucking round with an ugly bitch
u desecrate yourself for the team, u hypocrite
u do everything in service of the plinth
u stutter and siphon all ur lifeforce in the crib
u choose everything but for what? why him
u do what u do - it's a sin - it's a sin
this made me go wow at reasoning models
because, while it was going some very stupid places
in its reasoning tokens, it genuinely made sense
of my cadence of writing here
The existing lines after "u dont even like what u wrote this week" are pretty raw and critical,
focusing on self-doubt and hypocrisy. I need to maintain that edgy, confrontational tone.
They mentioned keeping the essence,
so I need to use similar themes like internal conflict,
self-sabotage, and maybe some gritty imagery.
Looking at the existing lines: "u desecrate yourself for the team, u hypocrite,"
"stutter and siphon," "everything but for what? why him," and "it's a sin."
These lines are personal and accusatory.
The continuation should mirror this structure and intensity.
I need to start each new line with "u" and keep the same rhyme scheme.
Let me brainstorm ideas.
Possible angles: self-sacrifice, betrayal, loss of identity,
existential questions.
Use harsh verbs to show degradation.
Maybe include metaphors related to self-destruction,
like burning bridges, hollow actions, wasting potential.
Words like "slog," "ghost," "leech," "gravedigger" could fit.
Rhymes with existing words like "crib," "sin,"
"him" could be "-id," "-ick," endings.
Let me draft a few lines:
u burn bridges just to light up a cig, u lost the spark—what a fix,
u bitter pill u never spit
u slog through days like a ghost in the grid,
u trade your soul for a counterfeit quid
u claim you’d die for the art, but u ain’t lived shit,
u leech the love then pretend u never did
u grin through the grit, but the mirror’s a critic,
u built a throne but u still feel like a trinket
u dig your grave but blame the world for the pit,
u twist the narrative til the truth’s invalid
u scream "why me?" but the silence don’t flinch,
u cling to ashes then wonder why u stink
Check if these lines flow with the original.
Looks like they maintain the "u" start,
similar themes of hypocrisy, self-sabotage,
and internal conflict.
Use of gritty imagery like burning bridges, ghost in the grid,
counterfeit quid.
They also follow a rhyme scheme with internal rhymes and end rhymes ("cig/fix", "grid/quid",
"shit/did", etc.) matching the original's style.
Maybe adjust some lines to better fit the rhythm if needed,
but this seems to continue the essence as requested.
it didnt quite get it, so i told it
thats a good start
new goal:
as u can see in this paragraph, theres the “u” component, but theres also a certain kind of syllabic rhyme i employ that i want you to continue, which u didnt quite use in your first example. when u consider rhyme, consider that u can be really rather creative with the flow, provided the last word of a line rhymes with the last word of the previous line. this is the rule of rap
u start doing things that u never wanted
u start fucking round with an ugly bitch
u desecrate yourself for the team, u hypocrite
u do everything in service of the plinth
u stutter and siphon all ur lifeforce in the crib
u choose everything - but for what? why him
u do what u do - it's a sin - it's a sin
this response looks like nothing but no model, no matter how i’ve prompted, has ever done this before
and the difference between this and o1 is that the lack of censoring, allignment, whatever, IT IS UNLOBOTOMISED meaning it just does what u want it to, including understanding the writing
i later used the model again but it gave me what i wasnt looking for whatsoever. it’s so sensitive to the instruction u provide in the prompt. dont specify enough and it loses 40% of its intelligence
BUT.. this way made me revisit an old idea i had. by using text loosely related to my written subject matter, it gave me a framework to string words together in my rap flowstate. since i started doing music regularly, i’ve recognised this flowstate, where you string words and phrases together from the ether. my writing certainly helped this, but just having the beat there is a huge factor. add the words, and my new genre of serious gibberish is born
i dont want to rap about anything specific but i do want my voice, as an instrument, to be present and peppering it with appropriately relevant random singular lyrics certainly helps it
on the other side, i’ve burned through $100 in openai credits solely on o1 in 1 week. ive pounded this model the same as r1
i hate on twitter, but when it has signal, it has strong signal. the posts on ways to leverage o1 as a good model, exactly 12 days ago, led me to my experiments. i spent a weekend just going hard, seeing if it would solve all my challenges in coding and writing certain things; the same way o1-mini is the reason my website exists
note: i also mean signal for mainstream monkey shit. if u use it right, u can get a finger on the pulse of public consensus for what is going on, and how people feel. remember, how people feel a lot of the time is just how everyone else feels. this sounds like it makes things easy, but it doesnt if you’re not one of these people, so twitter is good for this
i still need to try the model on a more comprehensive kind of task but theres too many things to do; i’ll get to it in due time
since the whole debacle with deepseek came out, i’ve spent the last few days shocked by how few people know what the fuck they’re doing, and by how many things exist just for people to amplify, or project, random prejudices onto. the 99% just repeat what someone else says with a random twist of a meme. in fact, most of the retards were just making memes: random people celebrating THE DOWNFALL OF SILICON VALLEY because every h100 consumes 3 trillion gallingtons of water per chatgpt query, or so they read somewhere, and rotherham rape gangs with mexican gpus are stealing LGBTs. the truly funny one on twitter is the vehicular goonercide dude’s gooneral - it made me wonder how ruben will go beyond reality with his new movie. we can’t outwrite real life.
i think everyone as close to the models as me considers it an “ehh”. i’ve used it for a week; it’s very good. reminder: most people only properly understood how to use a reasoning model in the last 2 weeks. r1’s benchmarks showed it was o1 level, but so do the benchmarks many other labs show every time they CHOOSE to finally publicly release a model instead of wink-wink nudge-nudging that they don’t have anything
we all knew no one has any specific knowledge advantage so why are the normies behaving so flabbergasted?
i set out to find out why
by that i mean, make absolute sense of deepseek factually
i dont like reading research papers because the last time i did, the llama3 one was 60 pages long, and in the hour i spent reading it i learnt more than i ever could in the year i’d been using llms. im also in a unique situation where ive studied enough to know what the fuck the terminology in the paper means. if i escape my bubble and ask: how many people in the world know what inference means? but still, this time i reluctantly went through all of deepseek’s papers, compared them to llama’s, did some due diligence, looking through some sources to prove my assumptions. sometimes the researchers leave in little nuggets where u also learn how to properly use the model, but i also wonder if thats just because ive been creampieing the reasoning models so much already i know how its dussy already feels
they’ve had their marketing machine churning, building the smart ceo, smart team image. i gave my DD to ian, and explained many of the things to him. my ting is, at first i was dubious because of what i read on twitter. my mistake. im retarded; why do u think im unemployed? i wanted to prove myself wrong, so i dug deep, and it is very plausible they did the “final training” on 2048 gpus; databricks has done it too.
they say h800s cos of sanctions but it could well be h100s.
i also believe their cluster of actual gpus they have access to
is far higher.
their 5.76 million usd estimated “final training” run costs was for
deepseek v3, not r1 mind you. r1 costs are a mystery.
if they’re doing $500 million usd in annual running costs, there
is no way they dont have access to more gpus,
even if they dont use them for “final training”.
but more importantly, who is bankrolling these guys? u cant just be a hedgie using the coin from your portfolio to bankroll a generational team of over 100 researchers. shit aint adding up. deepseek v3, the only model theyve made that isnt worthless, after r1, which is very new and open source, costs 0.85 usd per 1 million tokens. on 29th december, autists at home used openrouter, and they did $3,910 usd in api usage in total for the whole day… if we times that by 1000, that would be $3.91 million usd. even at x10,000 it does not reconcile with deepseek’s 500 million usd running costs; even if in china the usage is x100,000 that, it doesnt. and i dont know shit about the economics of inference. i quickly checked microsoft earnings and theyre profitable in the datacenter, but they also gouge. WHERE IS THEIR MONEY COMING FROM?
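back-of-envelope with the numbers above (the $500M running costs and the $3,910 openrouter day are the claims i’m working from; everything else is arithmetic, and i’m still ignoring the actual economics of inference):

```python
# back-of-envelope: what daily api revenue covers the claimed
# $500M/year running costs, and how many "openrouter days" is that?
annual_running_costs = 500_000_000   # usd, the claimed figure
openrouter_day = 3_910               # usd, total api usage on 29th dec
price_per_million_tokens = 0.85      # usd, deepseek v3 pricing

needed_per_day = annual_running_costs / 365        # ~$1.37M every day
multiplier = needed_per_day / openrouter_day       # ~350x that whole day

# at $0.85 per 1M tokens, that daily revenue is a colossal token count
tokens_per_day = needed_per_day / price_per_million_tokens * 1_000_000

print(round(needed_per_day), round(multiplier))
```

so just to break even on the claimed running costs, they’d need roughly 350 of openrouter’s entire days of usage in api revenue, every single day. public api usage is visibly nowhere near that, which is exactly why the number doesn’t reconcile for me.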
and theres so much noise online, even from the chinese insiders, that everything is getting retarded
i have to speculate by myself because no one cares about my opinion. this one guy steve hsu is pretty insistent that on a timescale of 3-5 years, demand for nvidia chips will be down greatly due to the shift in efficiency that allows ai to be run on phones and other small, lower-power devices, meaning nvidia’s hunky h100s and b200s will be under much less demand. he also talks about the sleepy joe sanctions on china that made chinese companies band together to cooperate on figuring out and solving their own chip production issues. he admits that china’s lithography is dogshit and will take a lot of time to figure out, as will their chips. the chips will happen fast, and the lithography will eventually happen.
i agree, but while he admits usage of ai can 3-5x in that time, maintaining demand, and nvidia’s software grip on companies, what about their ability to use the ai itself to recursively improve their research? and then, in tandem with that, all the robotics jensen is so horny and erect about, which will EXPLODE the hunger for his very own chips
the guy building the models has his own cluster, and his own researchers, and he’s competing with his own software
seeing how good these reasoning models are, but also how fucking slow they are, we need fucking gpus. it takes like 11 fucking h100s to run one instance of an r1 model with no quantisation, even with all its efficiencies. some retard called matthew is suggesting a cpu-only build for running the model at the cost of 6k usd for 6 tokens per second lmaaoooo what a retard!!!
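the 11-gpu figure roughly checks out from the published sizes. a sketch, assuming the 671B total parameter count (it’s a MoE, but all experts have to sit in memory), fp8 weights at 1 byte per param, 80GB per h100, and ~25% extra for kv cache and activations; that overhead number is my own guess:

```python
import math

# rough serving-memory math for r1 unquantised, assuming the published
# 671B total params in fp8 (1 byte/param) on 80GB h100s; the 25%
# kv-cache/activation overhead is a guess, not a measured figure
params_b = 671        # billion parameters, total (MoE: all experts resident)
bytes_per_param = 1   # fp8
hbm_per_gpu = 80      # GB per h100

weights_gb = params_b * bytes_per_param                    # 671 GB of weights
gpus_weights_only = math.ceil(weights_gb / hbm_per_gpu)    # gpus just for weights
gpus_with_overhead = math.ceil(weights_gb * 1.25 / hbm_per_gpu)

print(gpus_weights_only, gpus_with_overhead)  # 9 11
```

9 gpus just to hold the weights, ~11 once you leave room to actually serve requests. even being a MoE with only a fraction of params active per token doesn’t save you memory, just compute.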
back to steve. he knows a lot of his stuff but he has too many blindspots
sometimes he comes off like a political cheerleader, even though i think he has no reason to be, and in fact im sure he just wants people to see the facts for what they are
i will say that i need to run an ai to just read all the chinese articles. i literally have all the tools to read and translate, and a lot of the info is there; it’s just that because we all only speak english, none of us are reading it
fuck lisa su, what a bitch. i trusted u. obliterated my amd -30%!! my god, woman. never again
remember, i’m one of the few people actually using all these fucking stupid shits day to day
currently finetuning my 2nd f5 tts model. it’s on epoch 1105, taking a long time on a 3090TI. we need more GPUs! fuck python, fuck virtual environments
gonna finish the movie by end of this month because this is the last voice i need to train and that will conclude the hardest of my work
theres still some sound mixing and designing, but ive done all that before, plus trailers and colour grading and music i hope the audience will adore
to split up the audio, i used an automated model that uses whisperx. i used o1 to fix the code because the repo i got it from didnt work. so this is 1 example of a thing i could do that i wouldnt have been able to do without the reasoning models. claude 3.5 sonnet was useless as usual.
im confident now i can build about 3 different applications off of this beautiful new paradigm. things i couldnt build before. IT’S DEEPLY PAINFUL TO WORK THIS WAY & I WANT TO KILL MYSELF EVERYDAY but theres some low hanging fruit that solves problems; let’s see how much money i make.
my biggest weakness these days is going on twitter. i’ve just added that to my whiteboard; im automating away all and any news. everyone is a fucking retard. it’s getting tedious
my friend’s boyfriend’s family is being mean on chinese new year but she always has me
before i go, one thing. the only reason i could use o1 was because i worked at a company that had tier5 api access. it costs $200 a month for tier5 api access, which required u to have spent $1000 on api calls. u have to be running a business to do that, so with that decision openai shut out a lot of people doing great things, like me. the only reason i have access is a lucky connection; something i did nearly a decade ago is the reason why i have access today, and this is fucked up. i spend £340 a month living; i’d never have the $1000 to make in api calls, so i would never have had access to intelligence. deepseek changed this with their model: they made it open, and they made it possible for me to create things that are going to help a lot of people