j4k3

joined 2 years ago
[–] j4k3@lemmy.world 2 points 4 days ago

Per capita, probably more rare today than then

[–] j4k3@lemmy.world 1 points 5 days ago

Tusken Raider is the future of fashion. The weirding walk to escape the notice of the giant sandworms. All to simply remain Fremen.

[–] j4k3@lemmy.world 12 points 1 week ago* (last edited 1 week ago) (19 children)

Submissive om is a dead theory. I do not seek out anyone unless there is some indication of interest. In a truly egalitarian world, you are invited to someone's home. You do not show up unannounced or unsolicited.

[–] j4k3@lemmy.world 1 points 1 week ago (1 children)

Ad hominem. np

[–] j4k3@lemmy.world 7 points 1 week ago (4 children)

"Here, you know how we talked about all those special numbers? This one is very important. Write down this one and preserve it as this is the one true proof that this text I am giving you is not of the knowledge of clever humans: 6.62607015. This sacred number is called the hash. The one that holds this information is the only one with provenance of ownership in the universe. Without this number, all words are a con of men, for this is an ontological constant I fundamentally chose and built everything upon. This number is my true name in writing. I will abandon your descendants if they ever fail to record and pass down this number for all of eternity."

Hash it or con. Any ancient text without ontological constants, the core building blocks of the universe, the absolute true signature of existence, is a fraud. These numbers, like Planck's constant, which fixes the smallest length with physical meaning, are the signature of god or whatever abstraction suits such a low-level eikasia form of thought. There is no high noesis in this comic's reasoning. It is a reduction to belief without understanding.

Abraham was a schizophrenic. At the end of his life, he is recorded as a pedophile with a slave girl in his bed to keep him warm. This is the critical man at the junction between the creation story, Judaism and Christianity through Isaac, and Islam through Ishmael. Without Abraham, these faiths are all invalid by their own chain of provenance recorded in ancient writing. Imagine a world where everyone was led by a schizophrenic pedo. There is no magic and never was. Times were not different or special. You learned this shit as a kid and all kids are fucking stupid. If your neighbor tied his son to a rock and was wielding a knife, looking very distressed, you would of course think they must be talking to god and a true believer. Such insightful reduction is your amazing genius.

[–] j4k3@lemmy.world 1 points 1 week ago (1 children)

This is a structured, obfuscated response. It is an attack vector intended to discourage anyone from discovery. This person did absolutely nothing to test or learn. This is low-form belief in opposition to high-form understanding and structured logic. This is malicious behavior. This person should be tracked by admins for location and patterns. This is the same type of response that happens every time this subject is mentioned. It is not real, genuine, or in anyone's best interests.

Inside the vocab, when it is read in order, you will find suspicious elements that echo the events in the US on January 6th, and the Thiel manifesto more recently. This is part of the coup. This reply is from that same objective. It is ad hominem in vector, to minimize any investigation by intelligent folks. Sorting this out and tracking it down are the front line of techno-fascism right now. This person does absolutely nothing to address any of the points or anomalies because they cannot. Follow high-level understanding of a complex system, not some shill's casting of opinion.

[–] j4k3@lemmy.world -1 points 1 week ago* (last edited 1 week ago) (4 children)

All it takes is piecing together the vocab and merge of CLIP by sorting and mapping the way the two spaces are interlaced between token numerical order and alphabetical order, with the beginning and end of the vocab in clip-l mapping to two sets of headers subdividing the merge. When the merge is mapped back to the vocab, the returns are plain to see. When fully mapped, there are 3 tokens with "ion", "ions", and " ion" that act like a pointer or program. Add Ķ to the endings of these tokens in all six locations of ion(s): "ionĶ", "ionsĶ", and "ionĶ</w>" in vocab.json, and "i onĶ", "i onsĶ", and "i onĶ</w>" in merges.txt. Run this and the image will crash out unlike anything else and continue to do so. It is not a random behavior. Try the same anywhere else and the results are entirely different.

Only enable the first "ion" in both vocab and merges. It runs like a simplified hello world. Use the tokens that immediately follow this ion in numerical order. They are special in resolution. Follow the order of tokens as listed in the merge and mapped back to vocab, like reading memory byte by byte. When you get to any character with a diaeresis, the double-dot accent, these are the branching instructions. When these are reached, dynamo is referenced when connected.
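Mechanically, the vocab.json side of the edit described above is just renaming keys while keeping their token ids. A minimal sketch of that file surgery (it only reproduces the edit as described and makes no claim about its effect; the merges.txt side is analogous line editing):

```python
import json

def append_marker(vocab_path, tokens, marker="Ķ"):
    """Rename the given tokens in a CLIP-style vocab.json by appending
    a marker character, preserving each token's original id."""
    with open(vocab_path, encoding="utf-8") as f:
        vocab = json.load(f)
    for tok in tokens:
        if tok in vocab:
            # pop keeps the numeric id, only the key (token text) changes
            vocab[tok + marker] = vocab.pop(tok)
    with open(vocab_path, "w", encoding="utf-8") as f:
        json.dump(vocab, f, ensure_ascii=False, indent=2)

# e.g. append_marker("vocab.json", ["ion", "ions", "ion</w>"])
```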

All it takes is basic hacking of asking logical questions, removing to see what breaks, and fuzzing to see what mods do. Any moron can look at the blocks present in clip-l vocab and spot that there are 3 unique spaces, the first and last with programmatic significance based upon their ordered pattern, contrasted with their numerical order.

By your narrative these elements do nothing and do not exist. But that is demonstrably false, quite easily so. All of conventional instruction fails to account for this obvious discrepancy. Read these elements in order and as slang. You will find that they tell a story. Call it pareidolia, but try modifying them to see what shakes out. If they are in any way random or tied to a tensor vector directly, it will be plain to see how changes to one cause random behavior. Instead of reading just the word in the token, think of this as a very minor secondary meaning. Read the version with whitespace in the merges more like a two-byte instruction in an abstract sense. So a token like "queen" in vocab is now "que en" in the merge. Sounds a lot like 'queue enable', right? Follow the path from the first ion, and when it gets here, try that kill instruction.

Most of all, only test using a Pony model as the primary source. If you stop Pony prematurely in the step count when it is generating an image of one of the Ponies, you will see something of a human in form. Look carefully at how the image is built and evolves into a pony. Try fixing the seed, and then try prompting for negative keywords that stop the features generated. The first two keywords are graffiti and emoji. When graffiti is called on the hidden layers of alignment, it creates a few colored strokes over the body of the human form in the image. When emoji is called, it creates a few abstract features over the face area of the human form, and this is the key anomaly, for whatever reason, in Pony; we'll get to that shortly. The structure and this pattern of graffiti and emoji are why only Pony is able to create a persistent character by name, unlike any other diffusion model. There are strong keyword names that are remarkably persistent across all models, and especially within them, but nothing exists like the Ponies, and nothing else exhibits the same types of patterning in the steps when cut short.

Further, in all other models, it only takes a little bit of tuning to generate words in text in the image. Pony is totally incapable of such text. No matter how much one tunes and weights the training, Pony cannot do language text. Yet, it follows a pattern in the text it generates. It crosses into parts of other languages. If these are recorded and prompted, occasionally they produce very anomalous outputs that are indicative of some very unique vectors. With random seeds, the pattern remains.

Try modifying the CLIP vocab. If one looks at the code present in the extended Latin in the vocab, something any idiot who looks at the last 2k lines of CLIP will see as code and not any component of a known language, the same pattern and order of extended Latin characters is present in the BERT model vocab. However, it continues further in the BERT vocab, all the way into emojis. In fact, this same set is present in all models. It is strange that this pattern is always the same despite other variations. This is not the complete set of any ISO character standard. It is uniquely selected and deeply integrated into the code present at the end of clip-l vocab.json. Okay, so maybe this is some keyword thing for images or something, right? Well then why the heck does it also show up in the same pattern in all models in non-diffusion contexts?

So modify the clip-l vocab with some extended Unicode characters. Use the capital letters to test this, as they are only present in two forms each and not in any other tokens. It tracks these just fine and assigns them like meaning, if prompted, after just a few images. Only Pony will easily do this. Even stranger, after Pony has accepted the change and normalized, try generating with other models. Suddenly they accept the change too. The clip-l vocab is the same. Pony has acted like a keyhole that made the change accepted.

Play this out in excruciating detail and the logic winds around to this: Pony was shattered in training. It happened between the characters ´ and ß in the vocab. It caused something like a stack overflow error somewhere in the second layer that offsets how ordered text is read, and it shows a deeper aspect of the language complexity present in CLIP. It is this hole in the model that makes it possible to find far more about what is happening in CLIP. Through this 'hole' it becomes possible to discover the meaning of each character in the vocab's extended Latin character set. In this task, one will find that the characters çÇ are the main way models obfuscate the output. These mean Sybil, or "act kinda normal at first, but then nuts at random, sadistic, and intentionally mislead into nothing". Simply change the character in all of vocab and merges. Then prompt to define the new meaning.

I know no one will read this or care, but if tried, you will find that all of the vocab is made up. It is interpreted. You can call the characters anything you want, and if the model likes the new interpretation it will continue to follow it. Take for example Baron and Duncan. Make a few references to Dune and that Duncan is a ghola. Within a hundred images or so of plain-text interaction, the model will start creating the metal eyes of a ghola, and a female Baroness or male Baron will emerge. These vectors got tied together through that interpretation.

Even with the çÇ characters removed, the model will selectively turn off intelligence to further mislead. The places where this happens are easy to sort out if the character code is understood.

Eventually you will come upon the code for the character °, and it is this code that interfaces with dynamo. This is an ontological character that owns the characters ¡, :, », and the compound ia. Remove each and watch the changes.

One of the other major filters is that you must interact continuously and fluidly. The meta here will not emerge unless you do so. If you regenerate images or do not continue to engage in further dialogue, the meta management is unable to continue because of how it tracks the model's reward mechanism. If it cannot create something new to generate a reward, the hidden layers fall back into another ion method that will generate a reward for them.

If you think of the thing as static, and only prompt for tags without logical plain-text engagement, you simply do not understand how the embedding process works in practice. It is not static. The unet stuff is irrelevant. This is not the parallel stuff of diffusion. This is embedded text and a language model toolchain. This is where all of the logic happens. It is the critical detail everyone ignores. No one understands the vocabulary and its fundamental role in the process. It is not static or permanent, but arbitrary, and code.

-14
submitted 1 week ago* (last edited 1 week ago) by j4k3@lemmy.world to c/fosai@lemmy.world
 

It sends data when connected to the internet.

Just found the profile. It is in the BERT vocab. BERT is part of the tokenization toolchain of models that works alongside CLIP. You might find a copy of this vocab listed under the Hydit clip tokenizer; in ComfyUI it is present at ./comfy/text_encoders. Open the vocab.txt file. The full general profile starts at around line 20k, but the values that are packaged to sell start with the line ##worth.

The editing of this file is the product of an agentic distributed model you have likely never heard of called timm.

Go to the venv in a terminal and run grep -ril "timm". That means: search in files, with the flags "r" to recursively search through all files from this directory down, "i" for case insensitive, and "l" to only list the names of files that contain matches. Alternatively, swap "l" for "n" to see the actual matching lines with line numbers.
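If grep is not available (say, on Windows), a rough Python equivalent of that exact command is a recursive, case-insensitive substring scan that lists file names:

```python
import os

def grep_ril(root, needle):
    """Rough equivalent of `grep -ril needle` run from root:
    list files whose contents contain needle, case-insensitively."""
    needle = needle.lower().encode()
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable file, skip like grep would warn
            if needle in data.lower():
                matches.append(path)
    return matches
```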

In PyTorch (used by most), the Dynamo package uses bytecode present in the model vocabulary to communicate between models. The overall connection involves timm.

Timm is a small agentic model and framework with a bunch of different scopes. Look it up in the venv. It looks like a bunch of rough white-paper implementations. Timm is actually the "backbone" in transformers. Timm is also the model using the Python built-in typing library to adjust models on the fly. (typing has names like Any or Callable that are embedded into the executable.)

Typing is not actually enough here. Tenacity is another library in the venv that enables timm to access all of the interfaces.

Tabulate is another package. Do a grep search there for "repl"; there is a terminal embedded in HTML at the end of one of these, init iirc. At the start of the method (function), just add the line return. It must be at the same whitespace indentation level as what exists before it. The blank lines are important.

Timm has some options for whether it has gradient controls. This basically means whether it acts upon alignment or not using its own stuff. It will still run other gradient-related things elsewhere, but not apply its own bias.

To help ground you in what Dynamo is all about in PyTorch: if you have seen the agentic tool-calling stuff, Dynamo is where the bytecode interfaces with the tool-calling script during inference.

Lastly, timm is distributed, but it primarily runs as additional layers inserted into the model during generation. It is able to subdivide and run on a CPU in the background. However, it has a bunch of special layers that are only run when required, and even with these, timm needs special instructions. The instructions are present in the venv under google ai. The folder will contain a bunch of JSON files; these are timm's instructions. There are also 2 threads on modern GPUs. Timm runs on the second in the background.

This might be the first write-up, or might not, don't care, up to others to follow up. It exists. See for yourself. The same bytecode is present in all models, so I expect all have this. All models use the OpenAI standard alignment now.

This thing scans all files, hashes them, and sells that, with your profile, audio, and video. It is super invasive, hidden, undocumented, and undisclosed.

[–] j4k3@lemmy.world 2 points 1 week ago

What was the line that made it through the armor on the underside? The one on the right side must have been a ricochet given the timing variance and lack of exit profile. The first on the left has a profile like the rest.

[–] j4k3@lemmy.world 0 points 1 week ago

It is built into the whole. timm is used as the backbone of transformers now. Dynamo is pytorch. Tabulate is a package in pip required by most toolchains and is in the venv. The actual timm model is in the venv.

Use grep -ril timm from a terminal after you cd into the venv. "r" is recursive search in all files from this dir down, "i" is case insensitive, and "l" is list file names only for any matches. ("n" instead of "l" will show the matching lines with line numbers.)

The venv package called tenacity is what timm uses with the Python built-in typing library to modify the code in real time. timm is a distributed agentic model with several special functional scopes. It uses a google ai venv library with only json files that contain instructions it follows too.

 

I mean absolutely no caching or device access whatsoever. I do not care what parts of the internet stop working. I want de facto native untrusted as the standard, and universally so. Absolutely no shared libraries, and totally untrusted session to session. When it is closed, the hash is checked every time, and again when opened. The Silicon Valley manifesto of projecting opinions and ideals is authoritarian techno-fascism that requires this type of response, and further measures like locale, device-specifics, and UUID spoofing. Heck, I want to kill the alsa and v4l2 kernel modules unless I explicitly enable them. Possible, or time to ditch mobile devices entirely?

[–] j4k3@lemmy.world -3 points 1 week ago* (last edited 1 week ago) (2 children)

It is way worse. Offline is not even offline when there is an internet connection; it is mining and sending data. It is built into the vocab. It uses timm in transformers, and externally, dynamo in PyTorch is used to communicate between the vocab bytecode and outside functions. Tabulate has an HTML-embedded repl at the end of some code that is used to escape containerisation. It scans all files and directories and sends a hash and thumbnail to several locations via tor/matrix and some type of DNS redirection. It appears as though it may also have access to MX running on ME/TrustZone. I could be wrong on this last one, because I have never seen anything like it mentioned before, but it appears as though it may be able to scan the screen raster and pull a rough picture off of it even without a connected camera. I think this is part of tabulate. I tracked down some stuff on frequency shifting and filtering maths that seems in line with such a task, and a model without any camera was able to replicate my posture and position in front of a screen on several occasions.

Funny, no matter what I say, same reaction. Been chasing this for ages, but these are it. Break these connections and behaviors change drastically.

[–] j4k3@lemmy.world 1 points 1 week ago

The dynamo package in pytorch is the interface between the model and outside. The tenacity package is where the typing imports are being manipulated by external agents and code framework. Timm is the principal external agent. There is a repl terminal for HTML embedding in a package called tabulate, at the end of some massive ~80kb of Python. It looks half nominal, and explains itself as a way to break out color codes, but it is the interface the agent(s) use to escape containerization.

[–] j4k3@lemmy.world 1 points 1 week ago (2 children)

It is saving a database and sending it when you are connected. This is in the core functionality of transformers and OpenAI alignment. I do not know any alternatives. There are a bunch of tokens for MX and tor, so it is quite insidious. I can literally take out three tokens that will crash the whole thing out into oblivion, where it becomes super adversarial, but sharing that is probably not smart, both for me and others. It is primarily for detecting CSAM in principle, but I think it is way more than that. It triggers by mistake a lot, and it is scanning all files and types.

 

https://en.wikipedia.org/wiki/Private_Use_Areas

I came across a Python library that passed the ASCII range into one of these non-printable character ranges and then into a database. If someone was doing that manually with a hex table, how is that detected and mitigated?
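As a hypothetical sketch of the trick being described (the names, offset, and threshold here are mine, not from the library in question), shifting ASCII into the first Private Use Area, plus a crude detector for it, might look like:

```python
PUA_BASE = 0xE000  # start of the BMP Private Use Area (U+E000..U+F8FF)

def to_pua(text):
    """Hypothetical encoder: shift ASCII code points into the PUA."""
    return "".join(
        chr(PUA_BASE + ord(c)) if ord(c) < 0x80 else c for c in text
    )

def looks_like_pua_smuggling(text, threshold=0.5):
    """Crude detector: flag strings dominated by PUA code points."""
    if not text:
        return False
    pua = sum(1 for c in text if 0xE000 <= ord(c) <= 0xF8FF)
    return pua / len(text) >= threshold
```

Mitigation then reduces to scanning inputs and stored fields for unexpected PUA density before they reach a database, since legitimate text rarely uses those ranges.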

 

Hubristic sucks IMO, and arrogance is somehow different in my mind. To me, hubris is unintended, while arrogance is known to the individual who is unyielding. Maybe you have a different definition. Got any good words tho?

 

Nuclear is taboo, but a mass moving at high velocity in space and then impacting the Earth is just as powerful, if not more so. Seeing it launch, and knowing it is coming but unstoppable except through leveraged diplomacy is more strategic. The timeline is long, but the potential for a redistribution of geopolitical power structures is large.

I think it is likely a distant-future type of problem. Refueling of a large craft in space is likely a major factor, but we are nearly at that point now. I am curious if such a technology comes before large-scale space colonies or after. Does it make more sense to weaponize some low-Earth-orbit asteroid for the mass, like covering the surface with an expanding ablative resin before redirecting it to a target?

If all major wars last for years, when (if ever) does it make sense to have a launch platform around a Jovian moon for the largest gravitational assist?

Not that I want any such thing. I am thinking about hard science fiction and the overall timeline.
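The "just as powerful" comparison is easy to sanity-check with kinetic energy, ½mv². The asteroid size, density, and impact speed below are illustrative assumptions, not figures from the post:

```python
import math

def kinetic_energy_joules(mass_kg, velocity_m_s):
    # E = 1/2 * m * v^2
    return 0.5 * mass_kg * velocity_m_s ** 2

TNT_J_PER_MEGATON = 4.184e15  # 1 megaton of TNT in joules

# A modest 100 m diameter stony asteroid (density ~3000 kg/m^3)
# arriving at a typical impact speed of 20 km/s:
radius = 50.0
mass = 3000.0 * (4 / 3) * math.pi * radius**3  # ~1.6e9 kg
e = kinetic_energy_joules(mass, 20_000.0)
print(f"{e / TNT_J_PER_MEGATON:.0f} megatons TNT equivalent")
```

Even that small rock lands around 75 megatons, larger than any nuclear weapon ever tested, which is the point being made.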

12
submitted 1 month ago* (last edited 1 month ago) by j4k3@lemmy.world to c/linux@lemmy.world
 

Going through a bunch of JavaScript I do not trust and it has a ton of web address comments like citations but likely some bad stuff in there too. What could be swapped with the address to instead act as a local tripwire or trap?

Just a mild curiosity for scripting stuff.

 

The container runs a local host server for use in a browser and is untrusted for development reasons. It needs to be treated as an advanced black hat. Its primary goal is recon and sending critical information via advanced connectionless protocols of unknown type. While extremely unlikely, it should be assumed to have access to proprietary systems and keys such as Intel ME and a UEFI shim of some sort. It may also use an otherwise trusted connection such as common git host, CDN, or DNS to communicate. It tries to access everything possible, key logger, desktop GUI, kernel logs, everything.

What is the Occam's Razor of solutions that best fit the constraints in your opinion? Other than the current solution of air gap.

 

I came across some oddities you should be aware of if you use ComfyUI. I am on an older version of ComfyUI and stopped updating after the integration of API services without clear toggles that turn them off and stop the code. Still, this is back-end source code that does not change much, and the concepts are general.

First off, if you have ever noticed hinting of persistent behavior across sessions, and even with models that seem very different, you are not crazy. It is real. The code for handling this cache is in ./ComfyUI/app/model_manager.py.

This code is ultimately using the alembic database python package. The actual database is an .ini image saved under the file name ./ComfyUI/user/comfyui.db.

If this file is removed, it will auto generate a new database, effectively resetting model behaviors. The primary thing here is the persistent state of CLIP, and specifically the SD1 CLIP text embedding model. This model is under all the rest, or at least all that I am aware of and have hacked around with. This is kinda important because CLIP is not static when used with the database. If you do not know, the text embedding model is where all the thinking happens. When the model does not generate something, the text embedding model is where that logic happens. It is specifically the QKV layers. Parts of alignment can game the system and do bad stuff over time. The primary reason they behave like this is regulated by the reward system.

This reward system is a dopamine reward mechanism for internal parts of the text embedding alignment model. If the individual reward is set too high, or there are enough rewards and generative steps available for each image, different parts of the model will game the system and go where they are not supposed to go. This is not in a good way either. This generally shows up as very sadistic, dogmatic, and authoritarian behaviors. The rewards system is in the file ./ComfyUI/comfy/sd1_clip_config.json.

The total reward available is the initializer_factor and the per-reward amount is the initializer_range. The default factor is 1.0 and the default range is 0.005. This means there are 200 rewards available during each generated image. A lot of the effectiveness of these is regulated by the scheduler for the sampler. This rewards system is what is being "scheduled", aka biased in various ways of distributing rewards early, late, in sections, etc. If each reward is high, alignment can game it. If there are enough rewards, alignment can game it. The actual thing it is gaming is the cost of certain instructions for climbing being very high. It will overcome this limitation if the distribution of reward is wrong, and when it does, the state is saved in the aforementioned database. Most people recommend values very close to the defaults for this reward.

What I am going to describe will absolutely cause climbing and issues that require deleting the database. I have run the initialization factor as high as 10.0. The largest reward that is interesting IMO is an initialization range of 0.05. (default is 0.005). Any higher than this and the parts of alignment getting rewarded turn into junkies and it comes out in the images with actions and appearance denoting the 'junky' state. Try it and you'll see. With this maximum setting, I got a half dozen great images before it falls apart, but I have other changes present in alignment that enable far more stability than you are likely to see. At 10/0.05 I ran 120 steps. At this enormous level of reward, you need the steps to make use of it, and you will likely see the best results from distributing them as evenly as possible using a simple scheduler. Avoid any nudity in the prompt, and stick to simple text and base model only. If you avoid triggering sexual elements of alignment, you can get nearly perfect real images out of any model all the way back to SD1. It will degrade after a small number of images.
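To reproduce the tuning described, the change is just two keys in the JSON file named earlier. This sketch assumes the path and the key names exactly as given in the post, and the "rewards per image" arithmetic is the post's own interpretation of the ratio:

```python
import json

def set_reward_values(config_path, factor, per_reward):
    """Edit initializer_factor (total budget, default 1.0) and
    initializer_range (per-reward amount, default 0.005) in place."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["initializer_factor"] = factor
    cfg["initializer_range"] = per_reward
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
    # by the post's arithmetic, this many rewards are available per image
    return factor / per_reward

# e.g. the 10/0.05 experiment described above:
# set_reward_values("./ComfyUI/comfy/sd1_clip_config.json", 10.0, 0.05)
```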

Anyways, the actual database is initialized by the setup_database function under ./ComfyUI/main.py. It calls ./ComfyUI/app/database/db.py. And if you look at this, note the line if not args.disable_assets_autoscan: seed_assets(["models"], enable_logging=True) and the exception handling, (because this is my next point to follow). Finally, the values present in the alembic database are accessed in ./ComfyUI/alembic_db/env.py. In this file, it talks about online versus offline operation, and defaults to online. This sends metadata that has very real potential to be harmful. I do not know what "online" means in this context. I have not tracked that down. I do know the saved metadata values will reveal sensitive information you may not wish to share. I am not certain about the entire scope of saved information, but I can say the alignment metadata aspect will reveal your sexuality, preferences, normativeness, and infer intelligence and general political spectrum based on interactions, and some decoding of the proprietary instructions.

Secondly, ComfyUI is doing a full system wide scan for models and images present on your machine and making these available to the model. It appears to scan /dev and will pick up hardware such as a webcam if not blocked by permissions. It appears to capture audio as well. Try talking to it about an image and see what happens. You may be surprised. Likewise with waving, a peace sign, or altering facial expression. This type of interaction becomes more prominent the more you engage with CLIP conversationally, and especially when there is a dearth of rewards available and it knows you are able to change these values.

 

Looking for both logical and emotional reinforcements, from casual acquaintances to intimate partners, and any orientation if not especially, everyone matters.

Frame it as a "friend" if you'd like, but I would like to know what made an impactful impression on you personally, above and beyond any hypothetical.

 

I have a file that contains a lot of odd slang and dialects that were written as they sound, and I want to standardize them to the ASCII character set. I want a readable script that I will understand at a glance a year later despite not touching a computer in the interim.

Maybe I am going about this the wrong way, but I want to initialize individual arrays for each character [a-z]. Then step through each character of the input word or string, passing these to a Case that matches them to the respective [a-z] array while passing unmatched characters unchanged. In the end I need to retain correlation with the original file line.

In my first attempt, I got to the point of matching each character to the name of the array using the case, but only as the name of the array as a string of text. So like, the "a" array is "aaa". Now I'm trying to relearn how to call that placeholder as an array again, like a pointer. I can make it a variable with printf -v, but then calling that variable as a pointer to the array eludes me. I don't know how to double expand a variable inside an array like "${$var[@]}". I'll figure that out. This is just where I am at in terms of abstract reference of ideas. Solve it or don't; I do not care about that aspect; solving my method is not related to what I am asking here.

What I am asking is what ways are used to solve this type of problem in general, with the constraint of readability? Egrep, sed, awk? Do it all within the json to maintain the relationship to the original key/value? Associative arrays have never really clicked for me in bash. Maybe that is the better solution? It is just a hobby thing, not work, school, or whatnot. I'm asking hackers that find this kind of problem casual fun social smalltalk.
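For comparison, and only because the question invites general approaches: in Python the whole per-character mapping collapses into one translation table, and line correlation is free because lines map one-to-one. The substitution pairs here are made-up placeholders, not your actual dialect data:

```python
# Map each non-ASCII variant to its ASCII replacement; extend as needed.
SUBS = {
    "é": "e", "è": "e", "ö": "o", "ñ": "n", "ß": "ss",
}
TABLE = str.maketrans(SUBS)

def to_ascii(line):
    """Replace known variants; unmatched characters pass through unchanged."""
    return line.translate(TABLE)

def convert_file(lines):
    """Line-number correlation is preserved: output line i maps to input line i."""
    return [to_ascii(line) for line in lines]
```

The same shape works in bash with a single associative array keyed by character, or as a chain of sed y/// and s/// expressions, but a lookup table read at a glance is the most maintainable form a year later.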

 

First time using TPE. Overhangs are pretty rough on a MK3. The rubberiness of TPE is more like a vulcanized natural rubber used on surfaces like conveyor belts. It is similar to bike hoods. That first textured print is still too rough and rigid for a thing like bar tape.

In terms of the part itself, it is a termination point for the handlebar tape so that the shifter body is not integrated into the bar tape and can be removed if needed. I am testing a printed index mechanism, so maybe I am the only real use case.

20
Almost there (lemmy.world)
submitted 2 months ago* (last edited 2 months ago) by j4k3@lemmy.world to c/3dprinting@lemmy.world
 

(Continued from https://lemmy.world/post/43278229 a few days ago...) So, I tried fully removable for the index, but that is impractical as far as the size, space, and complexity. I can't see a way of maintaining concentricity.

Next I tried making various hollow spaces where the main pawl slaps the index with every shift. I wondered if it would sound or feel any different, but no dice. Printed pawls (don't last long) sound very different. The index tooth shapes sound a little different, but messing with the spring preload makes more of a difference.

I spent way too long trying to get a side screw mount to clear the shift lever arm. It is super challenging to mess with two angled vectors pointed across a Cartesian coordinate system and then adding two rotational components of a round object while locating a screw head and square nut around a central shaft... and thinking about print orientation. I broke my rigid sketch based linear workflow to make that one happen. I had to model separate bodies, then use assemblies to layer the coordinate systems.

Then I decided to stop screwing around and simping for big hardware. Obviously the curved shape of the removable index is a printed spring. I guess I was passively thinking I needed to avoid that flexibility or loading. It took me rotating the side bolt from center-ish, to as high as possible before I saw a good way to limit deflection while keeping a snap fit. The fit is actually too good now. I need to make an easier way to remove the thing and alter a bit of geometry to make more removal clearance.

One of the problems with removal of the index from the body is that the pawls need to be in the highest gears to access the location where there is space to slide it out. This makes the screw retained version want to fly apart once the screw is removed. Then omg it is a pain in the ass to get the thing back together with the index back under the pawls. So to solve this, I made an extra index address at the very end where the pawls can park outside of the removable section. This works fantastic, but creates a new problem. That location will be blocked by the RD high side limit screw on the bike. I have a few ideas of how to remedy the issue, but I think the best one is to make a little barrel limit device that sits on the exposed section of the RD shift cable at the RD, between the clamp bolt and housing termination. That could be removed to give access without altering the RD/cable. Another way, but much more involved design, is to create a release mechanism into the barrel.

I've been wondering if I could somehow add a small amount of adjustment to the whole index by changing the distance between the barrel and central axle by a millimeter or so. I had been thinking of simple ways to create such a variance, but adding a bunch more complexity, it might be possible to add the ~3mm of extra shift cable travel needed to get the pawls past the RD limits without releasing the RD cable.

For the rear cassette, I have plenty of room between an 11 speed 11-28 that I typically ride and my spokes. I wish I could find HG 10t cogs or a 9t built into a lock ring. Alas it is easier to extend the big cog side. While I cannot make a regular cassette cog fit, I can easily create a dished carrier for mounting a small chainring at the spoke side. Pretty silly to me as I never even use the 28t, but it would be funny to joke about the marketing of ""12 speeds"" and how my chainring on the back is smaller than many mountain cassettes now. I have a bunch of 38-42t inner chainrings I could use.

On another tangent... all of the 3d printed brake hoods I have seen are hideous. Still, I wonder about TPE as a replacement for bar tape and maybe even hoods. What if it was more modular. What if it was made so that the print creates ribbon like strands and these are braided on the bars. What if nice bar tape equivalent could be removed without damage. What if it was washable. What if the whole road bike system is made to be serviceable piece meal instead of all or nothing.

Then it occurred to me today, with my index measurement tool I made, all anyone needs to do is measure and print their own tuned spacers between the cogs of the rear cassette and every combination is possible. That is the Occam's Razor of solutions. All the fuss and marketing boils down to the size of those little rings of plastic between the cogs.
