You are currently reading a thread in /diy/ - Do It yourself

Thread replies: 49
Thread images: 10
File: evs.jpg (41 KB, 400x300)
Well...
I'll start off by saying I don't have high expectations for this thread, but I figured it's worth a shot. I recently bought an M68HC11PFB with a 68HC11E20EM (the two combined are also known as the M68HC11EVS) emulator board from a surplus auction. I have experience with, and have been able to find a good amount of documentation for, the M68HC11E20 emulator board, but I can't find the slightest hint of information on the HC11E20EM variant. The boards look very similar but have noticeable differences; notably, the flex cable for the HC11E20 is not compatible with the HC11E20EM. I'm starting to think what I've bought is some kind of prototype for the HC11E20. Does anyone have a clue what I've stumbled across here? More importantly, does anyone know if it will be compatible with software for the HC11E20 model?

inb4 try it yourself: I'm waiting on a USB to DB25 cable to arrive in the mail.
Pic is the HC11E20 model.
>>
>>960028
What's it do?
>>
File: e20em.png (3 MB, 1920x1080)
This is a pic of the E20EM variant.
>>
File: e20em and cable.png (857 KB, 1920x1080)
>>960030
It allows you to program, test, and troubleshoot the Motorola 68HC11E20 microcontroller.

This pic is what happens when I try to use the E20 ribbon cable on the E20EM variant. It's possible the only difference is the pinout, but I just don't have that kind of luck.
>>
Furthermore, there are wires soldered to the board I bought. I don't know if these were put there by Motorola to correct a mistake in PCB routing or if some idiot was using this for solder practice.
>>
>>960037
>68hc11e20
That seems oddly specific. A Google search only turns up datasheets, Mouser saying the shit's discontinued, and pictures of obscure circuit boards.
Why are you screwing with this microcontroller? Sounds like a good story.
>>
>>960061
I've used this system before and I'm comfortable with it. Yeah it's old, but it's easy to program. A 68hc11 isn't powerful by today's standards, but it would work perfectly fine for small DIY projects. Wish I had a better story for you.

I've been wanting to make a nixie tube clock for a while now. I'm thinking that will be my first project if I can get this thing functioning.
>>
>>960091
Oh cool, can you still buy this MCU? If so, what's the current price per piece? I checked myself just out of curiosity, but I saw prices at $15 a piece and noped the fuck out of there.

I wish they would start making nixie tubes again; eventually all of the ones left are going to be used up in clocks.
>>
>>960310
>>>960091
>Oh cool, can you still buy this MCU? If so, what's the current price per piece?
Kinda. Over the past several days I've found several A-series and E-series chips, but not specifically the E20. The good news is that this emulator board is capable of emulating several E- and A-series chips, so even if I never find an E20 (to go in a finished project) I have several similar chips to choose from.
I got my USB to serial cable. It took some playing around, but this thing actually works. I'm getting some weird fluctuations on ports A and D, but that could be operator error. I also noticed last night that someone has replaced the E20 chip that is supposed to be on the board with an E1. The chips are very similar, but that does leave some potential for a little bit of weirdness. I'm still hoping to find an E20 (to put in the emulator board) to eliminate the potential for said weirdness, but so far it's looking like a lost cause.
From what I've seen, $10-20 seems about right.
It's a long shot but if anyone knows of a commercial product that used an e20 I'd like to know about it. Salvaging one might be my only option.
>>
>>961300
I admire your dedication, and I know where you're coming from with wanting to use something familiar.
Do you know C? If so, you can program Atmel and I think PIC using C. At some point you have to give up; for me, $20 per chip is when I start sourcing cheaper, more efficient options.

I'm really curious though: so this whole thing programs and emulates just a single microcontroller chip?
>>
One of the hardest things I ever had to do was fire a guy because he couldn't move on.

He was married to a particular controller, knew her like the back of his hand, and the two made sweet music together.

The frustrating part was that he was brilliant, and young enough to still learn new tricks, he just chose not to keep up with the times.

Lucky for him there was plenty of old equipment out there using his bride, so the cuck was able to feed himself doing repair work. At least until those machines are all discontinued.

He threw away a promising career, and any chance of making a name for himself. I'm sure his is not the only tragedy of its kind.

Far be it from me to tell you to dump that cheating slut who holds you back and sucks the life out of you. That is a decision only you can make for yourself.

Maybe you're scared of being rejected by sexy young technology. Perhaps you think you can't keep up with these little vixens. Could it be morality that prevents you from getting your hands on these loli boards? So sinful and experienced for their age... oh the things they can do.
>>
>>961382
lost
>>
>>960028

> 68k

nice
>>
>>961357
It emulates several A and E series M68HC11 microcontrollers. 12 in total. For reasons I don't entirely understand E20 was the base model for this emulator.

I know a little bit of C. I took a summer class on it my sophomore year, but I wasn't very good at it. I've been trying to brush up, but there has been too much going on in my life to really dig into it. I used this emulator in a class a few years later without knowing anything about assembly language at the beginning of the semester, and by the end of the class there wasn't anything I couldn't do with this microcontroller. I'll admit, like >>961382 is saying, I'm somewhat stuck in the past. There are two reasons I can justify my obsession with this microcontroller. One is that when you program a microcontroller in C you don't have a full understanding of what's happening behind the scenes like you do when you program in assembly. And two, not every project needs the latest and greatest processor. If I can make it work with something older and less powerful, why not do it? The caveat to point two is that this microcontroller is so old it's hard to find and kind of pricey.
>>
>>961357
>>961382
It's also worth mentioning that I'm not using this in a professional capacity. This is just something fun for me to do in my spare time.
>>
>>961382
Just out of curiosity, which microcontroller was this guy married to, and why was he stuck on it?
>>
>>961495
>>961496
I'm >>961357. I understand what you're saying and, like I said, I respect where you're coming from. Personally, I still use a computer from 2003 as a server and a computer from 2008 as my main machine. I've been using this mobo for so long that there are almost as many dead or dying caps as there are working ones (well, it seems like that anyway), and I do intend to replace them when I have the time and money.
I respect not needing some flashy new shit. IF you had these older ones sitting around, like if you had a whole box of them, then I'd condemn using a new microcontroller. But
>Why not
is the part I don't understand. They cost 10 times more per unit. I don't know assembly, so I can't say that it's the same on new cheap controllers, but you can also program the new ones in assembly.
>>
>>961502
>>Why not
>is the part I don't understand. They cost 10 times more per unit. I don't know assembly, so I can't say that it's the same on new cheap controllers, but you can also program the new ones in assembly.

I'll first address the "why not." While I have extreme amounts of fun playing with this microcontroller, it's unlikely I'll ever use it for more than 3 or 4 DIY projects in my lifetime. Even if I'm paying $20 a chip, that's not a huge investment. Second, you can technically program anything in assembly. It just so happens that I have the software and every last drop of information related to programming a 68HC11 in assembly. If you want to program a Raspberry Pi (not a microcontroller, but my point still stands) or an Arduino in assembly, there are just too damn many hoops to jump through. If you have a better option I'd seriously love to hear it.

This is a little bit of a divergence but, with Moore's Law likely coming to an end within the next 5-10 years, I think it's going to be more important to optimize code than it will be to learn the newest technology. Assembly offers the ultimate optimization. I could be wrong about this, but if things go the way I'm predicting, I'm future-proofing myself by holding on to the past.
>>
>>961517
I hope I didn't make you feel like you were being put on the spot; I was just really interested in your point of view. Yeah, I get it now that you don't plan on using a lot of them.
I think I'll learn assembly at some point in my life; C has worked out well for me so far. I feel the same way about people that use Python, I think that's a joke. I'm sure you see C the same way.
Honestly, I think it's cool that you're sticking to these for hobby use. I'd rather see these old chips get put to use than sit for eternity in a landfill.
>>
>>961517
>I'm future-proofing myself by holding on to the past.

I have clients ask me every day about upgrading their ancient equipment. It's not always the same answer. You have to look at the bigger picture.

This box controls the gates at a railroad crossing. It's been doing its job for 30 years. Why upgrade? It still does its job today.

Ask yourself what more another system could do in its place. Instead of being a stand-alone device, it could be networked, providing feedback, remote updates, less expensive maintenance, and less downtime.
Sometimes it's best to leave well enough alone, sometimes it's best to rethink the situation.

If a new device can replace 255 old ones, it's really a no-brainer.

As for future-proofing, I hear you. Some critical things should not be made vulnerable. Giving skynet control of them would be the death of us all.
>>
>>961532
You didn't put me on the spot. I'm really enjoying this dialog. Haha, Python. Don't get me started. But yeah, your Python is my C.
C is by no means a bad language. This sounds counterintuitive, but I was disappointed when I bought an Arduino and it took probably less than 5 lines of code to print a line of text on an LCD screen. I had a project in the aforementioned class that required printing to an LCD screen. It took probably a few hundred lines of assembly.
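For the curious, something like this is all it takes on an Arduino with the stock LiquidCrystal library (the pin numbers are just an assumed wiring, not anything from a real project of mine):

#include <LiquidCrystal.h>

// Assumed wiring: RS, EN, D4-D7 on these Arduino pins.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);          // 16x2 character LCD
  lcd.print("hello, world"); // one call and the text is on the screen
}

void loop() {}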
I will admit that there are some things I'd rather program in C even with my limited understanding of the language. A long division algorithm, for example, can probably be done with one line of C. It would take most of an afternoon to write one in assembly if you were starting from scratch.
>>
>>961550
Your argument isn't falling on deaf ears. At some point you have to admit that newer technology beats older technology in almost every respect. I feel that, like you mentioned, there is still some benefit to keeping an older system on life support as long as it's doing its job and isn't too costly to maintain. From a professional standpoint, there are people who would almost certainly pay a large amount of money to keep their current system maintained rather than spend the larger amount it would take to replace it with modern components. From an engineering standpoint, it would be wise to keep both of these tools in your toolbox.
>>
>>961517
>This is a little bit of a divergence but, with Moore's Law likely coming to an end within the next 5-10 years
It's been coming to an end in 5-10 years for the past decade now.

>I think it's going to be more important to optimize code than it will be to learn the newest technology.
You're wrong about this, and fundamentally so. I'm going to dwell on this for a while, because it's important you understand how wrong you are.

Software only gets better. It can never get worse, because it never goes away.

For decades now, high-level languages have been compiled with optimising compilers. Optimising compilers do far more than just transform the language into machine code by piecing together Lego-style building blocks: they have free rein to alter the program as they see fit, so long as it provably does the same thing.
>>
>>961517
Some of the most primitive optimisations are function inlining, constant folding, and loop unrolling. Say you wrote:

>int f(int i) { return i + 2; }
>for (int i = 1; i < 5; i++) {
> j += f(i); }
>printf("%d",j)

A compiler from 2001 might notice function f is used in a loop and inline it:
>for (int i = 1; i < 5; i++) {
> j += (i+2); }
>printf("%d",j)

Then it might unroll the loop:
> j += (1+2);
> j += (2+2);
> j += (3+2);
> j += (4+2);
>printf("%d",j)

Then it might fold the constants:
> j += (3);
> j += (4);
> j += (5);
> j += (6);
>printf("%d",j)

A more advanced compiler would work in static-single-assignment form, which looks something like this:
> j1 = j0 + 3;
> j2 = j1 + 4;
> j3 = j2 + 5;
> j4 = j3 + 6;
>printf("%d",j4)

SSA is a representation where every label is immutable: it can be written once and only once, which makes it easy to reason about algebraically. An SSA compiler would simplify the SSA representation like this:
> j1 = j0 + 3;
> j2 = j1 + 4;
> j4 = (j2 + 5) + 6;
>printf("%d",j4)

> j1 = j0 + 3;
> j4 = ((j1 + 4) + 5) + 6;
>printf("%d",j4)

> j4 = (((j0 + 3) + 4) + 5) + 6;
>printf("%d",j4)

> j4 = j0 + 18;
>printf("%d",j4)

Which is as optimised as it can get. You couldn't do better by hand.
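If you'd rather not take my word for it, here's a self-contained version of that toy program (the variable names and the j = 0 starting value are mine, added just to make it complete); compile it with something like gcc -O2 -S and the generated assembly should show the whole loop collapsed into the constant 18:

#include <stdio.h>

static int f(int i) { return i + 2; }

int main(void) {
    int j = 0;
    for (int i = 1; i < 5; i++) {
        j += f(i);      /* 3 + 4 + 5 + 6 */
    }
    printf("%d\n", j);  /* an optimiser effectively emits printf("%d\n", 18) */
    return 0;
}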
>>
>>961517
Now, this was a trivial, contrived example. What makes compiler optimisation revolutionary is that the compiler can do this across the entire program at once. It can examine every site a function is called, and inline it in the places it makes sense to do so, and prove that it made the right decision. It can determine that in some specific case a value calculated in a library (and not even declared as a constant!) is /used/ like a constant and pull it in from a completely separate file by a completely separate author, yet determine that in a different case it does depend on some state and must be calculated.

It can be an expert on every processor, emitting binaries that detect what processor they're on at runtime and select blocks optimised for that specific processor.

It never overlooks anything, and it never makes mistakes.

>Assembly offers the ultimate optimization.
This is exactly the opposite of correct. The more you prescribe, the more you limit the optimiser, the more petty restrictions you bind it with, the worse code you get out. You can't outsmart an optimiser /now/ and they're only going to get smarter.
>>
>>961566
>>961568
I just got told.

I wouldn't be the first person to talk about Moore's Law ending, but I've seen the size of a transistor in modern processors. They are only a few atoms wide. We ARE reaching the atomic barrier, and that really can't be argued. Solutions to overcome that barrier may yet be discovered, but we are going to hit a point soon where a transistor can't get any smaller without being a single atom.

Someone probably wrote these algorithms in a high-level language and reviewed the assembly they compiled down to in order to check the optimization. I'm not saying I personally can optimize better than a computer, but computers only do what they're told. Someone somewhere is pulling the strings, and this person knows intimately how a processor works.
>>
>>961553
>long division
printf( "%i", 10/2);

That will print '5'. Assuming you don't want it printed, just use / for division and it just works. If you need the remainder, use the % operator: 11 % 2 = 1, because it gives you the whole-number remainder left over after the integer division (11 / 2 = 5 remainder 1).
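If you want both halves at once, a complete example looks something like this (plain C, nothing fancy):

#include <stdio.h>

int main(void) {
    int dividend = 11, divisor = 2;
    int quotient  = dividend / divisor;   /* integer division -> 5 */
    int remainder = dividend % divisor;   /* remainder        -> 1 */
    printf("%d / %d = %d remainder %d\n", dividend, divisor, quotient, remainder);
    return 0;
}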
>>
>>961558
When we first brought electricity into the home, people were happy to be able to have light in their houses without burning animal waste anymore.
Lives were improved, they were satisfied with a simple switch to turn the light on and off.

In those pre-inflation days they burned kilowatts without a second thought. But today's inflated dollar doesn't buy a nickel's worth of electricity.

And try as they might to delay the inevitable by slashing oil prices to hold onto the petro-dollar, the currency will be expanded once again within 2016, making tomorrow's dollar worth half what it is today.
With that 2.5-cent dollar, you'll be glad you upgraded to LED lighting, and you'll be looking for more ways to reduce your electric bill, which now costs twice as many worthless dollars as before.

Old power-hungry microcontrollers will be demonized and destroyed by the millions, sharing the fate of the CRT and the 4-door sedan.

Laptops, tablets, and phones will replace the watt-mongering desktop.

OP will have to think long and hard about what corners he has to cut in his budget, and how he can tighten his belt to be able to afford to power up his 68HC11 for even a few minutes.
>>
File: elmore1[1].png (137 KB, 448x361)
>>961576
>We ARE reaching the atomic barrier, and that really can't be argued. Solutions to overcome that barrier may yet to be discovered but we are going to hit a point soon where a transistor can't get any smaller without being a single atom.
That's not what Moore's law states, though. It states that the /number/ of transistors in a comparable IC doubles every two years, not that the size of them halves.

ICs are getting more and more three-dimensional: multiple dies are getting layered in single packages, and dies are gaining more and more transistor layers. Flash, in particular, now resembles a tower block.

>Someone somewhere is pulling the strings and this person knows intimately how a processor works.
Hundreds of people are pulling the strings. But that doesn't really matter, because they improve the compiler once, and millions of people reap the benefit. Most of the advantage comes not from understanding machine code at the low level, but algorithms at the high level.

Once you've transformed the program into the most efficient abstract program it can be, you still have to convert it to actual machine code that an actual processor can execute. In the traditional compiler pipeline, this is done in the "register allocation" and "instruction selection" stages. The thing about these stages is that you can actually brute-force them: try every possible way of allocating/spilling registers and model the real-world performance, then choose the best; test every possible way of combining instructions to implement each abstract step and pick the fastest.

Wisdom like this doesn't actually require a human skilled in assembler, just an accurate model of the processor and lots of compute-time to waste.
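A grossly simplified sketch of the brute-force idea, purely illustrative (real allocators track live ranges, spill costs, and calling conventions; the "cost model" here is just a use count): choose which of a handful of values get to live in a couple of registers by scoring every possible choice and keeping the cheapest.

#include <stdio.h>

#define NUM_VALUES 5
#define NUM_REGS   2

/* How many times each virtual value is used; a stand-in for a real cost model. */
static const int use_count[NUM_VALUES] = { 7, 1, 4, 2, 9 };

static int popcount(unsigned x) {
    int n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

int main(void) {
    unsigned best_mask = 0;
    int best_cost = -1;

    /* Try every way of choosing which values live in registers. */
    for (unsigned mask = 0; mask < (1u << NUM_VALUES); mask++) {
        if (popcount(mask) > NUM_REGS)
            continue;                     /* more values than registers */
        int cost = 0;
        for (int v = 0; v < NUM_VALUES; v++)
            if (!(mask & (1u << v)))
                cost += use_count[v];     /* every use of a spilled value hits memory */
        if (best_cost < 0 || cost < best_cost) {
            best_cost = cost;
            best_mask = mask;
        }
    }

    printf("keep in registers: mask 0x%x, spill cost %d\n", best_mask, best_cost);
    return 0;
}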
>>
>>961581
Still driving a 4-door sedan and using incandescent light bulbs.
>>
>>961576
Aside, the main reason hand-optimised assembler isn't used anymore is the difficulty in proving that it's correct.

If your software controls a nuclear reactor, you want to be able to prove (in the mathematical sense of the word "prove") that it always operates the reactor safely. There are languages that let you do this: you provide a formal specification, and it generates code that implements it, shows its working, and shows why that code is correct. If you specify that (immensely-simplified example) the control rods must always go down when the temperature is above 300 degrees, it can show you a mathematical proof that this always happens.
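To make that concrete (a toy sketch in plain C with made-up names, not how such systems are actually written): the property is just "temperature above 300 implies rods down", and the whole point of the formal tools is that they prove it for every possible input, rather than spot-checking a few cases at runtime like the assert below does.

#include <assert.h>
#include <stdbool.h>

/* Hypothetical, massively simplified reactor state. */
struct reactor_state {
    double temperature_c;
    bool   rods_down;
};

/* One control step: enforce the (simplified) safety rule. */
static struct reactor_state control_step(struct reactor_state s) {
    if (s.temperature_c > 300.0)
        s.rods_down = true;
    return s;
}

int main(void) {
    struct reactor_state s = { 305.0, false };
    s = control_step(s);
    /* Runtime check of the invariant for this one input only;
       a proof-based tool would establish it for all inputs. */
    assert(!(s.temperature_c > 300.0) || s.rods_down);
    return 0;
}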

Similarly, if your software controls a jet fighter, you want to be able to prove both that it works correctly and that it always hits its deadlines. There are realtime languages that let you specify constraints like "this loop must run every n milliseconds and must take no more than m milliseconds to complete", and they will use their intimate knowledge of the target processor to show that, no matter what path the execution takes through the control structures, the correct numbers always arrive on time*.

It's technically possible for a human to do this (and, indeed, a human has to do this to the compiler itself, or there'd be no way to trust its output), but the labour involved is immense, and the risk of human error difficult to mitigate.


* aside aside, you can't prove this in general. If a language is sufficiently powerful, it is not possible to determine if its execution will /ever/ end, let alone how long it will take. This is called the "Halting Problem". There are languages that are deliberately engineered to not be that powerful, so their execution time can always be calculated. https://en.wikipedia.org/wiki/Halting_problem#Avoiding_the_halting_problem
>>
>>961580
Right. It's easy in C. Doing it in assembly, especially if it's a multiple digit base 10 number, is a huge but gratifying pain in the ass.
>>961581
I actually lol'd.
>>961583
>ICs are getting more and more three-dimensional: multiple dies are getting layered in single packages, and dies are gaining more and more transistor layers. Flash, in particular, now resembles a tower block.
This won't take us as far as you might think. The reason we don't make processors bigger than they currently are (length x width) so we can fit more transistors is because electrons take a certain amount of time to go from one point to another via silicon. If we make them any larger the electrons don't reach their destinations within the expected time period and the processor behaves erratically. Expanding into the 3rd dimension still lengthens the path of the electrons so our ability to expand in the 3rd dimension is severely limited. This doesn't even take into account the reduced heat dissipation of a thicker processor.
>Once you've transformed the program into the most efficient abstract program it can be, you still have to convert it to actual machine code that an actual processor can execute. In the traditional compiler pipeline, this is done in the "register allocation" and "instruction selection" stages. The thing about these stages is that you can actually brute-force them: try every possible way of allocating/spilling registers and model the real-world performance, then choose the best; test every possible way of combining instructions to implement each abstract step and pick the fastest.
Very interesting point. Sometimes it really does pay off to do things half-assed a million times until you stumble upon the best solution. If engineering were an MMORPG I'd call this the zerg solution. Another tool for the toolbox.
>>961585
I lol'd again.
>>
>>961591
Are you a computer science major?
>>
>>961604
>The reason we don't make processors bigger than they currently are (length x width) so we can fit more transistors is because electrons take a certain amount of time to go from one point to another via silicon. If we make them any larger the electrons don't reach their destinations within the expected time period and the processor behaves erratically.
We reached that point ten years ago: the Pentium 4 pipeline includes "drive" stages that literally do no calculation, just send data from one side of the processor to the other.

The key insight in the past decade has been to embrace parallelism: if your code can use twice as many threads to go 1.5 times as fast, then instead of designing a single core that's 1.5 times as fast (which is really really hard), you only need to design a die the same size with twice as many cores (which is comparatively easy).
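(Quick sanity check on that "twice the threads, 1.5 times the speed" figure, purely illustrative and assuming Amdahl's law with a parallel fraction p of about two thirds:

speedup(n) = 1 / ((1 - p) + p/n)
speedup(2) = 1 / ((1 - 2/3) + (2/3)/2) = 1 / (1/3 + 1/3) = 1.5

so a program that's roughly 2/3 parallelisable gets exactly that trade.)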


The reason processor dies are processor-die-sized is more that single defects can wipe out an entire die, so if a wafer has n dies and m (randomly scattered) defects, you're left with roughly (n - m) usable dies. Smaller dies, better yield.
>>
>>961614
Interesting you bring up the P4 with this example. That processor got BTFO by AMD. That was Intel's plan. Pipeline the shit out of the P4. It worked well in theory. Not so much in practice. I feel like your die size argument has relevance, just not to the current discussion at hand. Having a smaller die size will result in a higher yield and therefore greater profit. But that doesn't change the fact that we can't cram more transistors into a chip by making it larger and still expect it to run at ~4GHz.
>>
>>961622

As layer upon layer increases the vertical landscape, maintaining the shortest distance while still allowing for cooling causes the processor to take on a spherical shape: "a hairy, smoking golf ball."
>>
File: pic 1.png (997 KB, 1920x1080)
Back on topic here: does anyone know a way of recreating pic related? This is the prototype end of the ribbon cable with the 52-pin PLCC attachment head. The PLCC attachment has some kind of problem with multiple pins intermittently shorting to ground. If nothing else works I can just run 52 male-to-female jumper wires from the emulator to a breadboard. It would be nice to have this in a streamlined package that didn't look like shit.
More pics of the faulty part incoming.
>>
File: pic 2.png (958 KB, 1920x1080)
>>
File: Bidirectionalring[1].png (161 KB, 1240x891)
>>961622
I didn't say the P4 was a good idea, just that your radical ideas about clockspeed, distance, and the speed of light had already occurred to Intel fifteen years ago.

>That was Intel's plan. Pipeline the shit out of the P4. It worked well in theory. Not so much in practice.
No, Intel's plan was to use tiny transistors and massive clockspeeds. They thought that as they shrank the transistors, the threshold voltage and leakage current would have a linear relationship, and therefore if you halved the transistor size, you could double the clockspeed yet maintain the TDP. The reason it didn't work was that below a certain size of transistor the relationship between threshold voltage and leakage current was *not* linear, and as they shot for higher and higher clockspeeds, the TDP went through the roof.

Across the P4's lifespan, they quadrupled the clockspeed, yet far from staying the same, the TDP more than doubled, and by the time they were ready to put a fork in it, the chips put out over a hundred watts.

All this has nothing to do with the pipeline architecture, other than that the large feature size of the day meant short stages were necessary to stop functional units being physically larger than the distance an electron could move in one clock.


>that doesn't change the fact that we can't cram more transistors into a chip by making it larger and still expect it to run at ~4GHz.
That's where you're absolutely, utterly wrong. The speed-of-light problem limits the size of a pipeline stage (and therefore a core), not a processor. And cores are showing no sign of getting larger.

What enables modern chips to hit 4GHz when 2005's chips broke down and died is innovative transistor design*, that reduces leakage current. This is a property of the actual transistors, and is orthogonal to the higher-level design.


* https://en.wikipedia.org/wiki/Multigate_device#Tri-gate_transistor
>>
File: pic 3.png (960 KB, 1920x1080)
I know it looks a little strange, but it's meant to resemble a 52-pin PLCC chip so that it can plug into a PLCC socket in the device you're prototyping. It's basically the "male" end of a PLCC connection, if such a thing exists.
>>
>>961634
This guy would do the job, no?

http://uk.rs-online.com/web/p/plcc-sockets/2446903/
>>
>>961637
This debate is becoming so complex it's getting hard to keep up. If what you're stating is accurate, and I can't think of any reason it's not, then it would appear there is no limit to the size we can make a processor so long as we stuff it full of tiny cores. If this were the case, why have we not done it yet?

>and is orthogonal to the higher-level design.
I'm not sure what you mean by this.
>>
>>961643
That would be a good start at least. I'd still need to find a PCB that will accept it and route to two .05-pitch male headers, if at all possible.
Being that this thing's broken anyway, I may just try to desolder/resolder it and see what happens.
>>
>>961646
>If this were the case why have we not done this yet?
Because of yield issues, as detailed earlier.

Also parallel programming is hard, and many problems are not parallelisable.

3d rendering is parallelisable, massively so, which is why Nvidia has produced some of the biggest dies in the history of humanity.

>I'm not sure what you mean by this.
Orthogonal:
4 - (software engineering) Of two or more aspects of a problem, able to be treated separately.
https://en.wiktionary.org/wiki/orthogonal#Adjective

If you built a 486 out of 14nm FinFETs, you could run it at 4GHz too. The clockspeed is for the most part dictated by the design of the transistors, not what you build with them. The high-level design only starts to influence clockspeed when you make a pipeline stage physically larger than a light-clock, which modern designs tend not to do.

If planar transistors had continued to scale in clockspeed the way they had in the '70s, '80s, and '90s, then pipeline architecture would be crucially important in determining clockspeed. But they didn't, so it isn't.
>>
>>961651
We're beyond the point where I can discuss this intelligently. I enjoyed the debate and I appreciate the info.
Is there a modern microcontroller you'd recommend? I could pick any one of them and start playing with it, but is there one you think is particularly robust or easy to use?
>>
>>961654
You'd probably enjoy the PIC family. They're all really cheap, and people commonly program them in a variety of ways, including assembler.

They're probably the common modern microcontroller that most closely feels like the one you're using at the moment.
>>
File: 1280px-PIC16CxxxWIN.jpg (230 KB, 1280x960)
I've yet to find a definite source for the 68HC11E20, but I'm still waiting to hear back from a distributor. I have found a source for some 68HC711E20's. It's literally the same chip, but with one-time programmable EPROM. It's one-time programmable because it doesn't have a window in the package to erase it with UV light. Would it be possible to do pic related to one of these chips without destroying it? As a reminder, the chip I'll be buying is in a 52-pin PLCC package.
>>
>>961954
You can etch epoxy without destroying the chip (iirc with very hot concentrated sulfuric acid), but this simply can't be your best option.
If you insist on using this processor and can't get non-OTP parts, could you use an external program ROM instead?
>>
>>962162
The short answer is I don't know. From looking at the data sheet it appears there are some chips in this family that don't have any onboard storage. There's a chapter in the reference manual about communicating with other chips but as of yet I don't fully understand it. I'm hoping to get this figured out soon.