Sep 18, 2004

tranquil consciousness

owlofcreamcheese is a bad poster

I am sure there are many reasons why he is a bad person but the fact that most of D&D can list trivial aspects of his boring life and that he feels unfulfilled while living in Maine and the endless details of his interactions with the opposite sex which occur mainly through internet dating sites which he provides in a secondary attempt to garner the attention and support he does not receive as a result of his primary purpose in life, that being posting excerpts of popular science articles, all of this is at best subsidiary to why he is a bad poster

the absolute worst thing about oocc is that he has no clue about 99% of the science articles he drools over and when confronted with someone who threatens to puncture his engorged techno-fetish with the scalpel of knowledge he protectively sprays vitriol and idiocy about the thread

some people might label him a cargo cultist but he exists on a moral level below those happy plane-constructing peoples: their slavish and time-consuming imitations of the trappings of technology were intended to help assuage the suffering of the tribe as a whole; oocc has zero desire for anything other than calming the troubled waters of the ocean that is his lack of self-respect

the current D&D thread is a microcosm of oocc's interaction with science and technology

1. Start with a popular science magazine article, blog post or foreign-language newspaper. Peer-reviewed articles are verboten because they are hard to read and not cool enough. 'scientists extract image from brain', yes this article from Mainichi Shimbun is perfect

2. Insert OOCC characteristic comment, praising great techno-priests, whose divinations and machinations will improve us all any moment now and whose plans and workings are above mere mortals

Owlofcreamcheese posted:

I have to say this is one of the most shocking technological advances I can think of and I absolutely did not see this coming.

3. as the thread progresses, commence OOCC pontification as to the ramifications of this scientific advance that OOCC does not in any way understand. Control of verbs must be absolute: 'suspect', 'presume', and whenever possible 'imagine', because we do not know poo poo, as is our place

Owlofcreamcheese posted:

I imagine sound would be possible. maybe even easier when all is said and done.

I suspect vision is the initially simplest since it probably does have sort of a "frame buffer" of everything being individual 'pixels' even if they are all physically jumbled up.

I also suspect this is currently a slow process and would not capture sound simply due to it taking time to get a reading out.

4. Oh no! Someone whose reaction to the article incorporates actual knowledge, and who fails to demonstrate the requisite awe and appreciation of the techno-gods! An interloper who dares to cite the hidden knowledge, the authentication-required writings that mortals must not touch! BLASPHEMY!

Dedekind posted:

For those with online journal access, the article is here.

I'm a little confused why people are so impressed by this. They're doing this in primary visual cortex, which is spatiotopically organized, with neurons tuned to luminance in specific regions of the visual field.

Lastly, for various reasons, we just understand visual processing better. We have a much shakier grasp on "what matters when you're trying to recognize sounds," and are completely lost when it comes to olfaction.

The best other candidate for this approach would be somatosensation (touch), but it seems to be the red-headed stepchild of sensory processing, unfortunately.

It dares to dash the imaginings, the sweet sweet not-knowings! How dare it?

Owlofcreamcheese posted:

Are you serious that you don't see this as impressive?

give it one last chance to turn into the light

5. Retribution - the blasphemer, uncontrite, must be punished mightily

Dedekind posted:

I feel a certain professional obligation to be the sad cloud raining on the parade, pointing out that this is really only a minor technical advance within a currently-established, active field of neurobiology which already has a broad body of impressive results. A healthy sense of perspective might keep neural imaging from being the flying car of the modern generation.

Owlofcreamcheese posted:

Your deluded to think thats a cloud on the parade though.

Instead of being all "heh, nothing new, losers, wake me up when it's something cool" why not help the discussion by linking us up all the other studies that your so familiar with.

Who else has done this stuff? what results have they gotten? your own link seems to imply past success had only been in recognizing between several pre-made images.

The last I knew about this technology was injecting cats with dye then killing them and bringing out a slice of brain. but if your familiar with this technology why not tell us all about it, instead of telling us how unimpressed you are by the fact that there is other people doing this, instead of telling us all the other people that have had success like these people have.

your link says "Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories." is this false?

yeah, gently caress you false prophet

why not help us along by linking the entirety of neurophysiology while you're at it




how about I select single lines out of the sacred writings, can you justify them?

didn't think so


6. and so forth

these interactions have happened so often that oocc has begun to himself recognize the existence of a pattern at the heart of which lies something dark and distressing, but in excellent form he displaces this hatred onto those nasty oppressors who dare to, while educating the thread as to the context and import of these magical occurrences, draw away the holy awesomeness that is for him the sole reason for the existence of science and technology and possibly oocc himself

pangstrom posted:

I think sometimes these advances seem boring because they're just incremental and we knew they were going to happen eventually. They're not creating a new understanding or changing the way we look at anything, they're just kind of a parlor trick. Visuotopic map + way to measure at neural activity = this is inevitable. Still it's a milestone of sorts and pretty neat.

Owlofcreamcheese posted:

That always pisses me off. Our culture has this thing where if something isn't 100% absolutely new then it's not anything.

It also has this thing where if something is 100% absolutely new it's not anything because it's just ivory tower halls of academia stuff where we will wait and see if anything comes of it.

Literally nothing is allowed to be interesting! science is boring by definition!

Why do you people hate science! The joy of startled ignorance must be endless! Marvel, do not think! I have contributed nothing of import to this thread outside the original quoted article but I maintain a sense of proprietorship and will derail any actual scientific discussion to complain that you 'context-putters' are dark souls who destroy the joy of others!

Owlofcreamcheese posted:

Yeah, but dude, fMRI itself is like, what? 15 years old? something like that, how fast do we have to get bored of things?

However old this is, it's not very old. You haven't really presented anything that backs your claim this is just a different algorithm anyway, it seems they have done something fundamentally new. that they ARE the first to get a real image out. And past work either involved surgery or could only differentiate.

It seems unfair if your only criteria for it being uninteresting is that it has precursor technology. more so if the precursor technology is also new enough a thread about THAT wouldn't be out of line. What do you want to be impressed? Why are you so unimpressed you don't understand how anyone else could be?

7. and so on

I wasn't going to take this out of the WD&DP thread but then this happened:

Eronarn posted:

But we can already see their thoughts! We've been able to for years. What's different about this is that the neurons are laid out in a way that makes it easy to get more specific details. We can now see an 'A' instead of 'a', but before this we could already figure out whether someone was visualizing an 'A'. While this new development is a neat trick, I disagree with your claim that vision is one of the more important areas to have organized into an easily accessible structure. That's not true either for reading from it or signaling to it, and it would be much more interesting if memory or motor skills or consciousness were organized that way instead.

Owlofcreamcheese posted:

Eronarn posted:

But we can already see their thoughts! We've been able to for years.
I don't believe so!

(yes, that is the entirety of OOCC's post)

Eronarn posted:

Yes, what a huge jump. In one you have to predefine objects and in the other you have to predefine levels of brightness.

And that's just literally the first citation on a search for 'fmri' 'thought'. They aren't doing exactly the same thing I mentioned, but that's not due to technological limits, they were just researching something else. So in closing, shut the gently caress up OOCC, you don't know anything about the topic. It's okay not to know things, but it isn't okay to talk about things you don't know about as if you do!

Owlofcreamcheese posted:

And man landed on the moon 40 years ago, woop dee doo! When did we get so cynical that once a week passed we stopped being allowed to talk or care about something?

Everything has precursors, they are exciting, later versions are exciting too, this is exciting. If you don't agree fine, but showing something is well studied makes something MORE exciting than some random one off study.

tonelok posted:

Fallom posted:

Eronarn posted:

So in closing, shut the gently caress up OOCC

Since when do mods allow this trash in D&D?

They don't.
and while oocc may not be good at breadth of scientific knowledge or critical thinking or independent study or contextual argument or demonstrating the theory of mind needed to successfully encounter people less excited than he is without entering into a spiteful depressive spiral, he's really good at turning on a dime in the face of administrative power and casting himself as the offended party

Owlofcreamcheese posted:

Jesus Christ can we stop talking about if this technology is new, 9 months old or 3 years old?

We haven't had a thread about it in any of these cases.

In conclusion and since this is not D&D, gently caress you, Owlofcreamcheese, shut the gently caress up, and stop trying to snort up that gentle sense of wonder that shimmers over the wavefront of scientific progress as though it was a natural anaesthetic for the many hurts that are the qualia of being you

also stop spamming otherwise interesting threads with randomly selected popular science articles in an attempt to get attention and if you absolutely have to keep talking about science, set up a paypal account and see if the people who tolerate your presence will divvy up the $195 for a personal subscription to Nature
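the decoding trick at the center of all this fuss — spatiotopically organized voxels, each tuned to luminance in a small patch of the visual field, inverted back into an image — can be sketched on synthetic data. everything below (the toy encoding, the array sizes, the ridge penalty) is invented for illustration and is not the actual study's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spatiotopic" encoding: each voxel responds linearly to luminance
# in a few pixels of a 10x10 visual field (100 pixels, 400 voxels).
n_pix, n_vox, n_train = 100, 400, 500
mask = rng.random((n_vox, n_pix)) < 0.05          # sparse receptive fields
encoding = rng.normal(size=(n_vox, n_pix)) * mask

# Simulate training scans: random luminance patterns and the noisy
# voxel responses they evoke.
train_imgs = rng.random((n_train, n_pix))
train_vox = train_imgs @ encoding.T + 0.01 * rng.normal(size=(n_train, n_vox))

# Learn a linear decoder (voxels -> pixels) by ridge regression.
lam = 1e-3
gram = train_vox.T @ train_vox + lam * np.eye(n_vox)
decoder = np.linalg.solve(gram, train_vox.T @ train_imgs)

# Reconstruct an unseen image from its voxel responses alone.
test_img = rng.random(n_pix)
test_vox = encoding @ test_img + 0.01 * rng.normal(size=n_vox)
recon = test_vox @ decoder
rel_err = np.linalg.norm(recon - test_img) / np.linalg.norm(test_img)
```

with a clean linear encoding like this toy one, the reconstruction error is small; the hard part in real data is that the encoding is unknown, nonlinear, and buried in scanner noise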


Apr 16, 2005

Tits and cooters, Mr Bond.

i bet hes fat

Power of Pecota
Aug 3, 2007

It is not that bad, there is hope, there is charity, there is compassion blah blah blah Charles Dickens three ghosts visit Scrooge and he wakes up to life blah blah blah

Cefte posted:

Why do you people hate science! The joy of startled ignorance must be endless! Marvel, do not think! I have contributed nothing of import to this thread outside the original quoted article but I maintain a sense of proprietorship and will derail any actual scientific discussion to complain that you 'context-putters' are dark souls who destroy the joy of others!

Jul 15, 2007

I behave like a self-involved babychild and my posts are worthless.

Sexual objectification and racism are okay with me as long as we talk about the pretty colors.

I am an adult living in an age of enlightenment who chooses ignorance and demands it in others.

Cefte posted:

owlofcreamcheese is a bad poster

the current D&D thread is a microcosm of oocc's interaction with science and technology

(quotes n'such)

woulda been sufficient

Hardcore Sax
Oct 11, 2004

i am incapable of conceiving infinity, and yet i do not accept finity

he works as a techie in a maine elementary school; this qualifies him to speak on every scientific topic ever

Reaganball Z
Jun 21, 2007
Hybrid children watch the sea Pray for Father, roaming free

oocc sucks because he thinks technology for its own sake rather than education is important

in conclusion

I hate your stupid posting and your stupid lion avatar

Mar 3, 2004


Cefte posted:

a natural anaesthetic for the many hurts that are the qualia of being you


Democrat Death Tax
Jan 19, 2008

Owlofcreamcheese posted:

sometimes I wish I would turn out gay so I could reset my secret crush counter back to zero.

Owlofcreamcheese posted:

Actually I don't know if I ever told this story but in 2002 some really horrible things happened, Some grant that was paying for school was like "oh your grades aren't good enough because this is an incomplete" and I was going to have to take the semester off/drop out and one of my friends was sick and it seemed like it was cancer, and just a lot of stuff like that.

Then on the fourth of july there was some crazy rear end weather event where the sky was a really weird bright green color right before a huge thunderstorm and the air was all thick and sound was all distorted. And after the thunderstorm like... the grade came in and the grant came back, the friend didn't have cancer and I got a phone call from the girl I liked in highschool who I hadn't talked to in like 2 years asking me out on a date absolutely out of the blue.

I'm not religious or superstitious enough to think anything but a weird thunderstorm and a lot of amazing coincidences happened, but I always feel like maybe I lived my whole life and at the crappy end of it I got a chance to identify where my life went wrong and make a few edits and start back from the day the sky was all weird and get a second chance.


Nov 27, 2005

kingcobweb posted:

i bet hes fat

Mar 27, 2004

He seems like the type of guy to defend himself in here. At least I hope he is.

Nov 27, 2005

i can't help but get a son of sam vibe from this picture

Jun 5, 2001

Cefte posted:

owlofcreamcheese is a bad poster

Tax the Poor
Nov 5, 2008

by Fistgrrl

yeah and his names dumb

Tape Speed
Aug 3, 2005

by T. Finn

for most goons science is more of a belief taken on faith than any real discipline born of understanding, and owlofcreamcheese is the logical extreme of that faith, the backwoods pentecostal preacher

Nov 27, 2005

Tax the Poor posted:

yeah and his names dumb

an owl...of cream cheese?

also an internet search revealed the old klavernwiki which had a page about him


The phrase "so open-minded his brain falls out" was coined by time-travellers who looked at the following list of Owlofcreamcheese threads.

* In a certain light nuclear war would be thrilling
* Think of the children!
* naturalistic fallacy, why?
* Bush Threatens Veto of Costly Bill
* Worse crime = more years in prison.
* How would the end of cancer change society?
* Those who would sacrifice liberty for security deserve neither
* Sarah Connor, Sarah Connor, Sarah Connor, is scifi bad for sci?
* Do people act like religion is real?
* What killed God?
* Is the dislike of evolutionary psychology anti-Copernican
* Text based language
* Will internet history follow a person forever
* What do you think the ultimate outcome of obesity will be?
* How is acid rain doing these days?
* Will 'universal constructors' ever matter
* Elves and Orcs

Lamestream Media
Dec 2, 2007

by angerbotSD

if this image:

fills you with nausea, you may have at some point read an owl of cream cheese post

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

my argument is as follows:

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

One of the latest fads in optics is something called meta-materials. Although there aren't any practical optical elements based on meta-materials yet, this has not stopped theorists from leaping ahead and designing all sorts of fun things. Recently, the New Journal of Physics published a special issue devoted to the theory of so-called transformational media (the theorists' name for meta-materials). I don't have the time or energy to slavishly report on every article, but I will summarize a couple that attracted my interest. In this post, we'll take a look at the super-antenna.

But first, a digression: why the hell do theorists call meta-materials transformation media? It is because of a crucial insight. A material that refracts or reflects light can be replaced by a curved layer of space without the material. Thanks to this insight, instead of trying to come up with a shaped piece of material that will manipulate light as you would like, you can work backwards by figuring out how space must be bent to produce a particular optical response. With that answer in hand, the optical properties of the material can be derived from the shape of space. The importance of this approach is that, if you can imagine something you want light to do, you can design a curved space to achieve that. Optics once unimaginable are now being seriously pursued.

Among the flurry of activity, the super-antenna has started to gain some attention. Light antennas are not really new, but this one is a bit different. It would, in principle, take in a large swath of light from a range of directions and re-emit it in a laser-like beam. This is achieved by bending a 2D space into a creased heart shape.

Light hitting this heart shape will be smoothly bent until it exits from the pointy end of the heart. From the point of view of that space, the light is actually traveling in a straight line—just following the curves of bent space. The super-antenna can accept radiation from nearly any angle of incidence and it will always exit from the point—although it will not necessarily take the shortest path to get there from the point of view of normal space.

A super-antenna could be important because it will convert spatially incoherent light into spatially coherent light. Lasers are the main method for efficiently creating spatially coherent light, but, depending on the laser, they are not all that efficient. Laser diodes clock in at around 30 percent efficiency at the top of the range, while others struggle to obtain one percent efficiency—I used to work with a laser that had a 45kW power supply and produced a whopping 1W of laser light.

The point is that there are many applications that require spatial coherence where a laser is simply overkill. Furthermore, we have lots of ways of making incoherent light over large portions of the electromagnetic spectrum. Because it could handle broad-spectrum light, even if the super-antenna weren't that efficient, it would still find a huge range of applications. For the chemists among you, imagine high-resolution, Fourier transform infrared spectroscopy without averaging.

The biggest problem with this approach is that it leaves one critical question unanswered—how do we construct materials that have properties not found in nature? So far, this is being done by making composite materials structured on scales much smaller than a wavelength, so that light cannot resolve the individual features and behaves as if it were propagating in a homogeneous material with very different properties. The implementers call these meta-materials, which brings us back to the apparent disconnect between theorists and experimentalists.

Although the authors offer no clue as to how one might start constructing a super-antenna—they even note that the dielectric constant needs to be infinite at the crease of the heart—this doesn't mean that a non-ideal super-antenna cannot be constructed.
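The "curved space → material" bookkeeping the theorists use has a compact standard form: for a coordinate map with Jacobian J, an impedance-matched medium with ε′ = μ′ = J Jᵀ / det J reproduces the same light paths. Here is a minimal sketch with a deliberately simple stretch transform — the heart-shaped antenna map itself is not reproduced:

```python
import numpy as np

def material_from_transform(jacobian, eps_background=1.0):
    """Transformation-optics recipe: the medium equivalent to a coordinate
    map with Jacobian J is eps' = mu' = J @ J.T / det(J) (times the
    background permittivity). Impedance-matched, so eps and mu coincide."""
    J = np.asarray(jacobian, dtype=float)
    return eps_background * (J @ J.T) / np.linalg.det(J)

# Example: stretch space by a factor of 2 along x. The equivalent
# material is anisotropic: enhanced along x, reduced transverse to it.
eps = material_from_transform(np.diag([2.0, 1.0, 1.0]))
```

In this formulation, the infinite dielectric constant the authors note at the crease of the heart corresponds to the transform's det(J) vanishing there.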

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Our current, best theory that explains the large scale properties of the universe is called the Lambda-Cold Dark Matter (ΛCDM) model. This model has proven to be capable of explaining a variety of observable phenomena, including anisotropies in the cosmic microwave background radiation, the large scale structure of the universe, and observations of supernovae that suggest that the universe's expansion is accelerating.

The ΛCDM model, while extremely useful, explains the accelerating expansion via the inclusion of an exotic type of energy termed dark energy. This explanation, captured in the Λ term, describes the current universe's content of mass-energy as consisting of 74 percent dark energy. Currently, the only experimental evidence for dark energy is the accelerating expansion of the universe.

This need for an unknown type of energy, and the fact that it is believed to make up such a large portion of the universe, does not sit well with all cosmologists. In recent years, some papers have postulated that there is no such thing as dark energy; instead they suggest that we are living at or near the center of a large, low-matter density void that could extend a few mega- or gigaparsecs from Earth. If we were to exist in a void, then the apparent accelerating expansion of the universe would be explained by the fact that the data that's been viewed as indicating this acceleration was misinterpreted.

In a paper set to be published in an upcoming issue of Physical Review Letters, a team of physicists looks at how well various void models can really do at predicting/explaining observed phenomena that the ΛCDM model handles. The team examines two classes of void models of the universe, which they term constrained and unconstrained. The constrained model is one where a mathematical limitation is put on the density distribution found within the void.

CMB anisotropy power spectrum

For different void models, the researchers computed various observables that the model should be able to reproduce. They first compute the CMB anisotropy power spectrum, which describes small-scale perturbations that occurred early in the lifetime of the universe. This data is readily available from a variety of experiments, such as the Wilkinson Microwave Anisotropy Probe and related surveys.

The authors point out that since the CMB anisotropy power spectrum contains so much information, it "can potentially provide strong constraints on void models." The team combined the CMB information with supernovae data to see how well void models can describe reality. They also constrained the void models by making sure they could adequately fit the baryon acoustic oscillation (BAO) data—data that represents a signature of the early universe.

The researchers found that unconstrained voids can indeed fit the supernova and CMB data, but that fit comes at the expense of the Hubble constant. The Hubble constant is a measure of how fast the universe is expanding, and is estimated to be 70.8±1.6 km s⁻¹ Mpc⁻¹. For the unconstrained void models to get the CMB and SN data correct, they require the Hubble constant to be a very low 44±2 km s⁻¹ Mpc⁻¹.
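The gap between those two numbers can be put in rough statistical terms. Treating the two quoted values as independent Gaussian estimates (a back-of-envelope measure, not the paper's actual analysis), the disagreement works out to roughly ten combined standard deviations:

```python
import math

# Hubble constant values quoted above (km/s/Mpc)
h0_std, sig_std = 70.8, 1.6     # conventional estimate
h0_void, sig_void = 44.0, 2.0   # required by unconstrained void models

# Tension in combined standard deviations, assuming independent
# Gaussian errors on the two estimates.
tension = abs(h0_std - h0_void) / math.hypot(sig_std, sig_void)
```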

The work also demonstrated that the authors' derivation of a model-independent radial BAO formulation poses a huge constraint for void models, but fits quite nicely with the ΛCDM model. The authors conclude the article by stating how surprising it is that the ΛCDM model is capable of describing such a wide variety of observations with so few free parameters. They state that "Losing this predictive power [of the ΛCDM model] and requiring a fine-tuned primordial spectrum is a severe price to pay for the allure of Λ=0 [the removal of dark energy]."

While the authors of the paper feel that void models are, and will be, deficient in describing the nature of our universe, they agree with past papers promoting the concept of void models when they state that further information will be useful. Both sides, pro- and anti-void, state that upcoming missions will help resolve this open question.

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

When a car's underbody or a ship's hull begins to corrode, it usually ends up junked. New protective coatings developed at the University of Illinois heal over their own scratches with no external intervention, protecting the underlying metal. The self-healing elements, enclosed in microcapsules that rip open when the coating is scratched, are compatible with a wide range of paints and protective coatings. The coatings, being marketed by Autonomic Materials of Champaign, IL, may be on the market in as little as four months.

The materials, described online this week in the journal Advanced Materials, were developed by Paul Braun and Scott White, both professors in the Beckman Institute at the University of Illinois at Urbana-Champaign. The self-healing system consists of two kinds of microcapsules: one filled with polymer building blocks, the other with a catalyst. Because the capsules, made of polyurethane, keep the reactive chemicals inside isolated, they can be mixed into a wide range of coatings. When the coatings are scratched, the microcapsules are torn open and their contents flow into the crack and form siloxane, a polymer that Braun likens to bathroom caulk. Unlike other self-healing systems, the Illinois coatings don't require elevated temperatures or moisture to mend.

The Illinois researchers scratched steel plates, some coated with the material and some with a conventional coating, then immersed them in salt water for five days. The metal covered by the new coating was protected against rust, while scratches in the conventional coating allowed significant rusting. "They make a very compelling case that the system is working as advertised," says Christopher Bielawski, an assistant professor of materials science and engineering at the University of Texas at Austin.

Bielawski points to the practical aspects of the Illinois coatings, which are made up of cheap, readily available chemicals. And Braun says that the new additives could be used in a wide range of applications, in coatings that are cured at temperatures up to about 150 °C. The group demonstrated the self-healing system in various coatings, including in a commercial military ship paint.

Most work on self-healing materials, including those developed by the Illinois group, has been aimed at incorporating them into various structures, restoring mechanical properties to walls so that they won't crumble or to airplane wings so that they won't fracture. The key to the coating technology, says Braun, was encapsulating the catalyst. If unprotected, the catalyst could degrade the coating itself; encapsulating it makes the system compatible with a wide range of paints and coatings.

These paints won't be suitable for places where aesthetics are important, like the top of a car, says Braun. The capsules are 10 to 100 micrometers in diameter, so thinner coatings incorporating them would be rough to the touch. Larry Evans, CEO of Autonomic Materials, says that the first target markets include industries in which performance is key, such as ships, oil rigs, and pipelines, where metals are exposed to harsh environments and taking systems offline for frequent repainting is costly. Evans says that the self-healing system is ready for commercialization and that the company has partnerships with major coating companies.

Tape Speed
Aug 3, 2005

by T. Finn

CrumFUNist! posted:

also an internet search

jesus just the first two google pages cover every major nerd base: accounts on youtube, slashdot, wikipedia, okcupid, livejournal...

Franklin W. Dixon
Aug 7, 2004

by DocEvil

oocc cites popsci articles like calenth cites wikipedia and its just as moronic

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

The innate intelligence of ants is helping Australian-based scientists develop prosthetic limbs that respond to brain signals in groundbreaking research that could change the lives of amputees.

The technology, created by a team of five researchers from the University of Technology Sydney (UTS), mimics the myoelectric signals used by the central nervous system (CNS) to control muscle activity.

Complex algorithms model the so-called "swarm-intelligence" used by ant colonies to locate food. Artificial intelligence researchers have long used the complex interactions between ants to construct a pattern recognition formula to identify bioelectric signals, which can then be applied in live human trials.

PhD student Rami Khushaba, from the UTS Faculty of Engineering and Information Technology, said the behaviour of social insects, like ants, allows scientists to understand the body's electrical signals and use the knowledge to create a robotic prosthetic device that can be operated by human thought, like a flesh and blood limb.

"I don't think the crossover from science fiction to science reality is that far away now," Khushaba said.

Khushaba is developing the mathematical basis for identifying what biosignals relate to particular arm movements and where electrodes should be placed. Swarm intelligence algorithms were chosen for their abundance in nature and because they use multi-agent techniques to solve a specific problem.

“We can use the behaviour of the ants to enhance the quality of the control systems that we employ with the robotic limbs. The biggest problem in our field is the amputee acceptance rate — they are disappointed if the system is not fast and accurate enough,” Khushaba said.

The researchers create a map of the voluntary intent of the CNS called an electromyogram (EMG) by attaching sensors to the human forearm — or what remains of it after an amputation.

Khushaba said the technology uses wavelet transforms to extract the valuable information from captured data. “We do many pre-processing techniques, we filter them, clean the data, and start to extract the important variables,” Khushaba said.

A simple microprocessor is mounted inside the hardware within the artificial limb – in this instance a forearm. While Khushaba would not be drawn on the cost of the system, he said it will not be expensive. He said the prosthetic limb will be available within two years once a manufacturer is found.

The team collected data on 10 movements using 10 variables of the forearm from six subjects and achieved 99.9 percent accuracy. Khushaba said the movement of the forearm is captured and filtered against the variables to minimise processing time.

Only a few seconds of data are required to capture and train the system to identify patterns in the raw data during the online testing phase. The entire system runs on MATLAB, a programming language designed for technical computing.

“You get signals from each sensor mounted on the forearm, and we describe these by variables. Then we have an objective function to get the best accuracy and we select the minimum number of these variables that will give us the highest classification accuracy,” Khushaba said.
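
The "fewest variables, highest accuracy" objective Khushaba describes can be sketched on synthetic data. This is a minimal illustration, not the team's method: it uses a simple greedy forward search and a nearest-centroid classifier as stand-ins for their swarm-based search and their actual EMG features, and all the data below is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed EMG features: 6 variables per sample,
# but only variables 0 and 3 actually discriminate the two "movements".
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 6))
X[:, 0] += 3.0 * y
X[:, 3] -= 3.0 * y

def accuracy(feats):
    # nearest-centroid classifier restricted to the chosen variables
    Z = X[:, feats]
    c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Z - c1, axis=1) <
            np.linalg.norm(Z - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Greedy forward selection: repeatedly add the variable that helps most,
# and stop when accuracy no longer improves -- the same objective as the
# swarm-based search, just with a much cheaper optimiser.
selected, best = [], 0.0
while len(selected) < X.shape[1]:
    acc, f = max((accuracy(selected + [f]), f)
                 for f in range(X.shape[1]) if f not in selected)
    if acc <= best:
        break
    selected.append(f)
    best = acc
```

The search keeps only the informative variables, which is the point: fewer inputs to process means less latency between muscle signal and prosthetic movement.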

He said the biggest challenge to the success of the robotic limb is maintaining system accuracy and speed. The EMG system extracts the signals from human muscles in less than 256 milliseconds.

"We hope accuracy will improve. It will be the very near future when amputees, who can still imagine moving a lost limb, will have access to a device that can truly respond to their intentions."

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Intel said Wednesday that it has completed the development phase of its next manufacturing process that will shrink chip circuits to 32 nanometers.

The milestone means that Intel will be able to push faster, more efficient chips starting in the fourth quarter.

In a statement, Intel said it will provide more technical details at the International Electron Devices Meeting next week in San Francisco. Bottom line: shrinking to 32 nanometers is one more step in its “tick tock” strategy, which alternates a new microarchitecture with a new manufacturing process roughly every 12 months.

While that strategy is unmatched, it’s unclear whether customers will see Intel’s tick tock plan as a huge selling point in a downturn. For instance, AMD executives have quietly begun highlighting the risks involved with shifting architectures so quickly. During its Shanghai server chip launch one of AMD’s big selling points was that customers only needed an easy BIOS update to upgrade. There’s a good reason for that though: It’s nearly impossible to keep pace with Intel. AMD is also trying to shift the playing field to virtualization and power consumption.

Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw dropping that customers can’t resist.

As far as the nuts and bolts go, here’s how Intel described its upcoming paper:

The Intel 32nm paper and presentation describe a logic technology that incorporates second-generation high-k + metal gate technology, 193nm immersion lithography for critical patterning layers and enhanced transistor strain techniques. These features enhance the performance and energy efficiency of Intel processors. Intel’s manufacturing process has the highest transistor performance and the highest transistor density of any reported 32nm technology in the industry.

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Scientists are developing new ways to selectively boost gene expression in the brain, in the hope of treating psychiatric and neurological disease. A growing pool of evidence shows that compounds that target this mechanism can improve learning and memory in rodents. But existing drugs, which were not developed for this purpose, are relatively weak and unselective, and their long-term safety is not yet clear.

Over the past few years, neuroscientists have begun to recognize the importance of epigenetics--molecular processes that change the expression of genes without altering DNA--in the brain, and in memory in particular. One of the key regulators of epigenetics is a group of enzymes known as histone deacetylases (HDACs), which trigger DNA to wind more tightly around neighboring proteins, ultimately dampening gene expression. Recent studies have shown that existing drugs that inhibit these enzymes can enhance learning in both normal mice and those that are cognitively impaired.

"I think the implication for human disease is really exciting," says Li-Huei Tsai, a neuroscientist at MIT. Last year, Tsai's group showed that giving brain-damaged mice an HDAC inhibitor allowed them to recall lost memories.

EnVivo Pharmaceuticals, a drug company based in Watertown, MA, is developing HDAC inhibitors that are more potent than existing ones and can easily enter the brain. (Valproic acid, for example, a drug used to treat epilepsy and bipolar disorder and currently being tested for cancer, is a relatively weak inhibitor.) According to results presented at a neuroscience conference last month, the company's lead HDAC inhibitor can enhance both short- and long-term memory in mice. The company hopes to test the drug in the next year, says Michael Ahlijanian, vice president of research at EnVivo.

While scientists don't yet know exactly how epigenetic regulation affects memory, the theory is that certain triggers, such as exercise, visual stimulation, or drugs, unwind DNA, allowing expression of genes involved in neural plasticity. That increase in gene expression might trigger development of new neural connections and, in turn, strengthen the neural circuits that underlie memory formation. "Maybe our brains are using these epigenetic mechanisms to allow us to learn and remember things, or to provide sufficient plasticity to allow us to learn and adapt," says John Satterlee, program director of epigenetics at the National Institute on Drug Abuse, in Bethesda, MD.

"We have solid evidence that HDAC inhibitors massively promote growth of dendrites and increase synaptogenesis [the creation of connections between neurons]," says Tsai. The process may boost memory or allow mice to regain access to lost memories by rewiring or repairing damaged neural circuits. "We believe the memory trace is still there, but the animal cannot retrieve it due to damage to neural circuits," she adds.

The safety of more potent HDAC inhibitors, especially those that target the brain, is not yet clear. A paper published today in Neuron highlights potential problems. Tsai and her colleagues found that inhibiting a specific HDAC enzyme increased cell damage and death in rodents with symptoms of Alzheimer's disease, while raising levels of the same enzyme protected neurons.

The findings suggest that scientists will need to develop compounds that act selectively on the different HDAC enzymes, perhaps inhibiting some and activating others. At this point, little is known about the specific functions of the nearly 20 different enzymes. But Tsai says that her group has identified one enzyme that appears to be specifically involved in memory. The researchers are also developing more selective compounds. "We hope to have something in the near future that we feel comfortable evaluating in people," says Tsai.

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Nuclear fusion could prove an abundant source of clean energy. But the process can be difficult to control, and scientists have yet to demonstrate a fusion plant that produces more energy than it consumes. Now physicists at MIT have addressed one of the many technological challenges involved in harnessing nuclear fusion as a viable energy source. They've demonstrated that pulses of radio frequency waves can be used to propel and heat plasma inside a reactor.

MIT's doughnut-shaped fusion reactor, the Alcator C-Mod, uses magnets to confine hydrogen in a turbulent, electrically charged state of matter called a plasma. By infusing large amounts of energy into the plasma, physicists can kick off fusion reactions that, in turn, release large amounts of energy. The MIT reactor is too small to generate practical fusion reactions that generate enough energy to keep themselves going--what's called a burning plasma. But the researchers have been working on ways to achieve this state in larger reactors, such as the planned International Thermonuclear Experimental Reactor (ITER).

The challenge is keeping the plasma confined in a stable rotation, with just the right amount of turbulence and the ideal temperature gradients so that it keeps burning. Traditionally, physicists control plasmas by injecting high-power beams of inert atoms. Controlling turbulence and temperature is critical: the better confined the plasma, the smaller the reactor needs to be and the less power required.

When directed well, inert beams in today's reactors "have substantial momentum and drag the plasma with them," says Earl Marmar, head of MIT's Alcator project. They also "heat" the plasma, supplying energy to kick-start fusion reactions. Marmar anticipates that in the future, the beam technique simply won't work: it will be able to impart enough energy, but not enough momentum.

MIT researchers led by John Rice and Yijun Lin have experimentally demonstrated that radio waves--which will be able to penetrate large plasmas like ITER's--can give plasma both energy and momentum. The MIT group placed powerful antennas at the edge of the reactor to launch two frequencies of radio waves into the plasma. One group of waves is attuned to protons. When these waves collide with protons, they heat up; the protons, in turn, collide with the hydrogen isotope fuel. The second group of waves is attuned to lightweight helium isotopes that the MIT group adds to the mix. These waves collide with the helium, imparting their momentum to the isotopes, which push the rest of the plasma. These experiments were described last week in the journal Physical Review Letters.
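
The frequency selectivity described here follows from the ion cyclotron resonance: a species with charge q and mass m absorbs waves near f = qB/(2πm). A quick check with standard constants (the field value is an assumed typical figure for Alcator C-Mod, not from the paper) shows why the two wave frequencies can address protons and helium-3 separately:

```python
import math

# Ion cyclotron resonance, f = q * B / (2 * pi * m)
Q_E = 1.602e-19       # elementary charge, C
M_P = 1.673e-27       # proton mass, kg
M_HE3 = 5.008e-27     # helium-3 mass, kg
B = 5.4               # toroidal field, T (assumed typical C-Mod value)

def cyclotron_freq(charge, mass):
    return charge * B / (2.0 * math.pi * mass)

f_proton = cyclotron_freq(Q_E, M_P)            # roughly 82 MHz
f_helium3 = cyclotron_freq(2.0 * Q_E, M_HE3)   # roughly 55 MHz
# well-separated resonances, so each antenna frequency heats or
# pushes only the species it is tuned to
```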

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

A group of scientists at Korea Advanced Institute of Science and Technology (KAIST) has fabricated a working computer chip that is almost completely clear -- the first of its kind. The new technology, called transparent resistive random access memory (TRRAM), is described in this week's issue of the journal Applied Physics Letters.

The new chip is similar in type to an existing technology known as complementary metal-oxide semiconductor (CMOS) memory -- common commercial chips that provide the data storage for USB flash drives and other devices. Like CMOS devices, the new chip provides "non-volatile" memory, meaning that it stores digital information without losing data when it is powered off. Unlike CMOS devices, however, the new TRRAM chip is almost completely clear.

Why is transparency important? Clear electronics could make a room feel more spacious by allowing devices to be consolidated and stacked in small clear spaces. The technology may also enable the development of clear computer monitors and televisions that are embedded inside glass or transparent plastic. The Korean team is also developing a TRRAM using flexible materials.

"It is a new milestone of transparent electronic systems," says researcher Jung Won Seo, who is the first author on the paper. "By integrating TRRAM device with other transparent electronic components, we can create a total see-through embedded electronic system."

Technically, TRRAM devices rely upon an existing technology known as resistive random access memory (RRAM), which is already in commercial development for future electronic data storage devices. RRAM is built using metal oxide materials, which are very transparent. What the Korean team did was build a chip by sandwiching these metal oxide materials between equally transparent electrodes and substrates.

According to the Korean team, TRRAM devices are easy to fabricate and may be commercially available in just 3-4 years. Don't expect them to replace existing CMOS devices, however. Instead, Seo predicts, the new transparent devices will drive electronics in new directions.

"We are sure that TRRAM will become one of alternative devices to current CMOS-based flash memory in the near future after its reliability is proven and once any manufacturing issues are solved," says Professor Jae-Woo Park, who is Seo's co-advisor and co-author on the paper. He adds that the new devices have the potential to be manufactured cheaply because any transparent materials can be utilized as substrate and electrode. They also may not require rare elements such as indium.

The article "Transparent resistive random access memory and its characteristics for nonvolatile resistive switching" by Jung Won Seo, Jae-Woo Park, Keong Su Lim, Ji-Hwan Yang and Sang Jung Kang was published on December 3, 2008 in Appl. Phys. Lett. (Volume 93, Issue 22). The article is available at

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

“Entanglement,” Andrew White says, “is normally considered a non-negotiable part of quantum information processing. In fact, if you told me a couple of years ago that you could do quantum computing without entanglement, I would have been pretty skeptical – to say the least!”

White says that he first heard the idea of non-entanglement quantum computing from Carl Caves. “I was intrigued when Professor Caves, on sabbatical here in Australia from New Mexico, mentioned that there were sober predictions that entanglement wasn’t always necessary.”

White leads a team of young experimental scientists at the University of Queensland in Brisbane, Australia. Ben Lanyon, Marco Barbieri, Marcelo Almeida and White have been studying deterministic quantum computing with only one pure qubit (DQC1). “Entanglement is not the final story on what makes quantum information processing powerful,” White insists. The Australian team’s results can be found in Physical Review Letters: “Experimental Quantum Computing without Entanglement.”

“Normally, in order for quantum computing to work,” White explains, “we need to encode the information into quantum bits—qubits—which are in a noise-free pure state. It’s known that the entanglement between these is what makes standard quantum computing powerful.” He continues, “With a DQC1 scheme, you only have to have one pure qubit, and the rest can be noisy or mixed.” The conventional assumption behind entanglement-based quantum information processing is that noise-free pure states are needed to gain a substantial advantage over classical computing. DQC1, though, could offer a more efficient and less resource-intensive route to quantum computing, since entanglement would no longer be a necessity.

“For this demonstration,” White says, “we used the smallest possible example: a circuit with just two qubits, one pure and one mixed. We ran a phase-estimation algorithm as a small example, and found in every setting there was zero entanglement, but that most of the states couldn’t be described efficiently in a classical manner.”
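
The two-qubit setting White describes can be sketched as a density-matrix calculation: one pure control qubit, one maximally mixed qubit, and a single controlled unitary. This is a minimal sketch of the DQC1 trace-estimation idea, not the group's actual experiment; the unitary and its phase are arbitrary choices for illustration.

```python
import numpy as np

# DQC1 ("one clean qubit") toy model: control starts pure in |+>,
# target starts maximally mixed (I/2).
I2 = np.eye(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.kron(np.outer(plus, plus), I2 / 2)

phi = 0.7
U = np.diag([1.0, np.exp(1j * phi)])          # unitary whose trace we want
CU = np.block([[I2, np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])        # apply U only when control = 1
rho = CU @ rho @ CU.conj().T

# Trace out the mixed qubit; the control's coherences now encode Tr(U)/2.
rho_c = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
estimate = np.trace(rho_c @ sx) + 1j * np.trace(rho_c @ sy)  # = Tr(U)/2
```

Measuring just the one clean qubit yields the normalized trace of U, even though the other register was never purified: that is the "you don't have to work as hard as you think" point.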

White points out that this is suggestive that there are other possibilities, beyond entanglement, that contribute to the power provided by quantum information processing. “We’re still chewing through the implications,” he says.

“This is not a universal panacea,” White admits. “For some problems and algorithms you just need pure qubits and entanglement, problems such as Shor’s algorithm. However, there are applications and problems where the DQC1 method will work quite well, and will be more efficient than trying to get qubits that are all pure.”

With so many different architectures and schemes for quantum computing – all of them trying to create a system in which all the qubits are pure – it is rare to see a group looking to find applications for a quantum information system that makes allowances for impurity and the introduction of noise – insisting that entanglement is not necessary. “The fact is that certain classes of problems don’t need entanglement, and they don’t need all of the purity. In some cases, all that is needed is one pure qubit and the rest could be mixed. Really, with DQC1, you don’t have to work as hard as you think you do.”

“We are starting to build more complicated algorithms to get an idea of where this could go. Regardless, the idea that entanglement may not be necessary for some types of quantum computing is big news.”

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Intel Labs researchers say they have achieved an important new step in developing silicon photonics, the ability to transmit data using light. A paper published in Nature Photonics details Intel researchers' efforts to develop a silicon-based avalanche photodetector, or APD, a sensor that detects and amplifies light signals. Intel says this type of avalanche photodetector offers greater bandwidth and amplification abilities than conventional photodetectors. At the same time, Intel is looking for less expensive ways to build these types of photodetectors.

Researchers with Intel Labs are publishing a paper that looks to bring the company closer to developing silicon photonics—the ability to transmit data through light pulses—for viable commercial uses.

The research paper, which is being published in the Dec. 7 edition of the scientific journal Nature Photonics, describes efforts by Intel engineers to build a silicon-based APD (avalanche photodetector) at a lower cost and that gives better overall performance than conventional photodetectors made of different materials.

The goal of photonics is to replace more conventional electrical interconnects that use copper wiring with on-chip components that use light to transmit data. In a world where companies such as Intel, Advanced Micro Devices and IBM are increasing the number of processing cores that can fit within a CPU, photonics is seen as a way to allow those developments to continue while simultaneously speeding up the vital connections that move data in and out of processors that sit close together, or even between separate servers.

In the past several years, Intel and IBM have been the two companies looking to develop new ways to bring photonics into the commercial market. At the Intel Developer Forum in August, Intel CTO Justin Rattner said he expects silicon photonics to enter the market as early as 2010. Rattner added that Intel wants to see the technology developed first for desktop PCs instead of the data center to show that silicon photonics is affordable and ready for mainstream use.

During a discussion of the latest research, Mario Paniccia, an Intel Fellow and director of the company's Photonics Technology Laboratory, echoed those same sentiments and said the APDs that his engineers helped develop would go a long way to reduce costs while increasing performance.

"With APDs, we have an opportunity to develop very high-speed optical links with the ability to drive faster and lower-cost technology in and around the platform," Paniccia said.

An avalanche photodetector is a sensor that both detects light pulses and amplifies them as light is directed into the silicon. Unlike other photodetectors that absorb one photon pulse and produce one electron, an APD can absorb and then amplify the light pulse and produce tens or hundreds of electrical signals.
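
That one-photon-to-many-electrons behaviour is often modelled with Miller's empirical formula for avalanche multiplication, M = 1 / (1 − (V/V_br)^n). The breakdown voltage and exponent below are illustrative placeholders, not parameters of Intel's device:

```python
def avalanche_gain(v, v_br=30.0, n=4.0):
    # Miller's empirical formula: gain diverges as the reverse bias v
    # approaches the breakdown voltage v_br (illustrative parameters)
    return 1.0 / (1.0 - (v / v_br) ** n)

gain_low = avalanche_gain(5.0)     # far from breakdown: gain ~ 1, like a
                                   # conventional photodetector
gain_high = avalanche_gain(29.9)   # near breakdown: tens of electrons
                                   # per absorbed photon
```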

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

A MICROSCOPIC swimming machine that works like a paddle steamer could help deliver drugs inside the body and move chemicals around inside miniaturised labs. The device is the first artificial microswimmer to move without using chemical propulsion or bending itself into different shapes.

For microscale swimmers, the viscosity of water presents a much bigger barrier to motion than we are used to on everyday scales. It is like swimming through honey for a human: any forward movement during one half of a swimming stroke would be negated by an opposite backwards motion in the second half, with the result that the swimmer goes nowhere. "In a stiff fluid, what you achieve in half of your swimming cycle you undo in the next half-cycle," says Ramin Golestanian, a physicist at the University of Sheffield in the UK.
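
Golestanian's point comes down to the Reynolds number, Re = ρvL/μ, which compares inertial to viscous forces. A back-of-envelope comparison with illustrative speeds and sizes shows why a micron-sized bead lives in the honey-like regime:

```python
rho = 1000.0   # density of water, kg/m^3
mu = 1e-3      # dynamic viscosity of water, Pa*s

def reynolds(speed, length):
    # ratio of inertial to viscous forces for a swimmer of the given scale
    return rho * speed * length / mu

re_human = reynolds(1.0, 1.0)     # ~1 m/s over ~1 m: Re ~ 1e6, inertia wins
re_bead = reynolds(1e-6, 1e-6)    # 1 um/s over 1 um: Re ~ 1e-6, viscosity wins
```

At Re far below 1, any reciprocal back-and-forth stroke cancels itself out, which is why the device needs a continuously rotating, paddle-wheel-like motion.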

That's why bacteria like Escherichia coli use a rotating corkscrew-like tail called a flagellum to propel themselves forward. With a continuously rotating propeller rather than a backwards-forwards swimming motion, the bacteria barrel along.

Now Golestanian and Pietro Tierno at the University of Barcelona in Spain have been able to achieve a similar goal with a micromachine that swims by mimicking a paddle wheel. The researchers built their microswimmer from two beads, 1 and 3 micrometres in diameter. They coated the beads in a protein called streptavidin that binds strongly to DNA and then fastened them together with two 8-nanometre strands of DNA.

The beads are made of a magnetic material and so align themselves with any applied magnetic field. By rotating this magnetic field, the researchers set the beads spinning, and were delighted to find that the beads moved through water at about 1 micrometre per second. "I didn't expect to see real propulsion like [that seen] in bacteria, to tell the truth," says Tierno.

The movement occurs only when the micromachine is close to the bottom of a vessel. This is because there is a less mobile boundary layer that "sticks" to the bottom surface of the fluid container and so exerts a larger force on the rotating bead than the rest of the water (see diagram). This makes the whole thing move, just as a paddle wheel can propel a boat because water resists the paddles more than air does.

Golestanian says: "It's like a unicycle wheel with the smaller bead as the pedal making it go around - with the DNA as the pedal shaft."

The team believes its technology can easily be shrunk to the nanoscale - the level at which it would be useful as a drug carrier. "Microscale and nanoscale hydrodynamics are not all that different," says Golestanian. The boundary-layer properties that the device needs to swim should be present in small blood vessels.

Tierno says the swimming beads could also shuttle reagents from one part of a miniaturised "lab-on-a-chip" to another. John Illingworth, a biologist at the University of Leeds in the UK, is impressed. "What they've done is certainly tough to achieve," he says.

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Entire industries and research fields are devoted to ensuring that, every year, computers continue getting faster. But this trend could begin to slow down as the components used in electronic circuits are shrunk to the size of just a few atoms. Researchers at HP Labs in Palo Alto, CA, are betting that a new fundamental electronic component--the memristor--will keep computer power increasing at this rate for years to come.

Memristors were first predicted in 1971 by Berkeley professor Leon Chua. They are nanoscale devices with unique properties: a variable resistance and the ability to remember the resistance even when the power is off.

After rediscovering Chua's work, researchers at HP Labs built the first working memristor in May of this year. And last week, at the first ever Memristor and Memristor Systems Symposium, in Berkeley, CA, the same team showed how memristors can be integrated into functioning circuits. Their circuits require fewer transistors, allowing more components (and more computing power) to be packed into the same physical space while also using less power to function.

"We're trying to give Moore's Law a boost," says lead researcher Stan Williams, a senior research fellow at HP, referring to a prediction made by Intel co-founder Gordon Moore that the number of transistors on a computer circuit (and therefore computer performance) should double roughly every two years.

Increasing performance has usually meant shrinking components so that more can be packed onto a circuit. But instead, Williams's team removes some transistors and replaces them with a smaller number of memristors. "We're not trying to crowd more transistors onto a chip or into a particular circuit," Williams says. "Hybrid memristor-transistor chips really have the promise for delivering a lot more performance."

A memristor acts a lot like a resistor but with one big difference: it can change resistance depending on the amount and direction of the voltage applied and can remember its resistance even when the voltage is turned off. These unusual properties make them interesting from both a scientific and an engineering point of view. A single memristor can perform the same logic functions as multiple transistors, making them a promising way to increase computer power. Memristors could also prove to be a faster, smaller, more energy-efficient alternative to flash storage.
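
The voltage-dependent, history-remembering resistance described above can be sketched with the linear ion-drift model that the HP group used to describe its device; the parameter values here are assumptions for illustration, not the published ones.

```python
# Linear ion-drift memristor model: resistance depends on the width w of
# the doped region, and w moves in proportion to the charge passed through.
R_ON, R_OFF = 100.0, 16e3   # fully doped / undoped resistance, ohms (assumed)
D = 1e-8                    # film thickness, m (assumed)
MU = 1e-14                  # dopant mobility, m^2/(V*s) (assumed)

def memristance(w):
    return R_ON * (w / D) + R_OFF * (1.0 - w / D)

w = 0.1 * D
m_start = memristance(w)
dt = 1e-5
for _ in range(1000):                  # 10 ms under a +1 V bias
    i = 1.0 / memristance(w)           # current through the device
    w = min(max(w + MU * R_ON / D * i * dt, 0.0), D)
m_biased = memristance(w)
# With the bias removed, no current flows, w stops moving, and the device
# holds m_biased indefinitely -- the "memory" in memristor.
```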

Although memristor research is still in its infancy, HP Labs is working on a handful of practical memristor projects. And now Williams's team has demonstrated a working memristor-transistor hybrid chip.

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Cells in the retina of mice can be coaxed to create new neurons following an injury, according to new research from the University of Washington. This is the most definitive demonstration to date that such regeneration is possible, given the right cues, for a specific type of neuron in the inner retina of a mammal.

If researchers could spur the development of different types of new neurons in the living human eye, they might be able to replace cells that are lost in diseases like macular degeneration and retinitis pigmentosa. Few or no treatment options are currently available for patients with these diseases.

"This is an excellent, clear demonstration that you can regrow cells of the inner retina," says Stephen Rose, chief research officer at the nonprofit Foundation Fighting Blindness.

The retina, which is located in the back of the eye, has an outer layer of cells that detect light and translate it into electrical signals. It also has inner layers, which process the signals and send them to the brain.

In degenerative disorders like macular degeneration and retinitis pigmentosa, outer-layer cells, called photoreceptors, break down in the early stages of disease, leading to loss of vision. Extensive research has focused on replacing these cells, in an effort to restore sight. In people with advanced disease or blindness, however, the inner cell layers may also break down or become disorganized and need to be rebuilt, says Rose.

"The outer retina is like the CPU, and the inner retina is like the motherboard," he says. "If I attach a new CPU to a dead motherboard, it won't do any good, no matter how great a CPU it is."

In the current work, developmental biologist Thomas Reh and his team first damaged the mice's retinas, using a chemical known to destroy inner retinal cells. Then they injected a cocktail of proteins called growth factors. This process spurred some cells, called Müller glia, to return to an immature state. Müller glia normally provide nutrition to other neurons and do not divide. Following chemical treatment, however, some of them returned to an undifferentiated state in which they resembled progenitor cells.

The immature cells then started to proliferate, some of them differentiating into mature neurons. In particular, they formed amacrine cells, which are located in the inner retina. These cells mediate electrical signals coming from the photoreceptors and are particularly important to motion detection and night vision, says Reh.

"We did not get a large number of new neurons," he adds. "But we showed that we could make new amacrine cells, the cell type that had been lost to damage." The findings were published this week in the online edition of the Proceedings of the National Academy of Sciences.

Franklin W. Dixon
Aug 7, 2004

by DocEvil

ban oocc for spamming tia

Democrat Death Tax
Jan 19, 2008

an appropriately boring and predictable response from oocc

Jun 4, 2005

by Fragmaster

no one is gonna read a single word of that

Sep 9, 2008

slow ya roll nigguh

probably the worst method of rolling with the punches ever

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Since the 1980s, researchers have used lasers to stop molecular vibrations, so that the molecules can be observed in their natural environment. Now researchers at Yale University have used the same kind of nanoscale optical force to control an integrated circuit. Their device could form the basis of fast, low-power optical chips, just as transistors are the building blocks of today's electronic circuits. The new device, a light-driven nanoresonator, could also be used as an extremely sensitive chemical detector. The work is a major landmark in uniting mechanical and optical forces at the nanoscale.

Chips that use light instead of electrons to carry data should be faster and consume less power than traditional integrated circuits. But so far even the fastest optical chips have incorporated electrical elements called modulators. These modulators encode light with data by converting the signal from light into electrons and back again. This extra step makes optical chips complex and drains power. A circuit developed by Yale researchers led by electrical-engineering professor Hong Tang incorporates a modulator that's driven by light, not electrons.

The Yale group began its work by creating a silicon optical chip. To make the modulator, they etched a small portion of the waveguide, the thin silicon road along which the photons travel, into a 500-nanometer-wide bar. This silicon beam, which is suspended from the chip's surface so that it can flex, has two functions. It both carries the optical signal and modulates it. Tang and his colleagues sent a light signal through the integrated circuit, then shone laser light onto the nano-optical modulator, causing it to oscillate up and down. These oscillations modulate the speed of the light traveling through the beam.

The Yale team is the first to demonstrate the existence of this optical force on an integrated circuit--and the first to exploit it to make a working device. "The light force can be put to real use," says Tang. His group has also demonstrated that it can make arrays of hundreds of working resonators on a single chip.

Optical tweezers have been very useful for manipulating free-floating nanoscale objects in solution, but they're very complex, requiring a high-power laser and an entire benchtop. Although it still requires input from a laser that isn't yet integrated on the chip, the Yale setup is simpler than that required for optical tweezers.

Described in the journal Nature, the Yale circuit "represents a technical breakthrough," says Columbia University mechanical-engineering professor James Hone. "It opens up a new way to make opto-mechanical switches that can reroute one optical signal using another." Hone says that such devices could be the building blocks of optical circuits. Adam Cohen, a professor of chemistry, chemical biology, and physics at Harvard, agrees--as long as making these devices proves compatible with standard semiconductor processing. The traditional approach, which involves converting the optical signal into an electrical one and back again, "slows things down and is more complicated," Cohen says.

Because the mechanical oscillation of the beam changes the way that light flows through it in a measurable way, the beams could be developed into very sensitive chemical sensors, says Hone. The Yale group has not demonstrated a chemical sensor. In theory, however, arrays of the on-chip silicon oscillators could be decorated with antibodies that bind blood proteins characteristic of diseases such as cancer. If a blood sample placed on the chip contained a small amount of the protein, it would bind to the silicon beam, changing the frequency of its oscillations--and thereby causing a measurable change in the speed of light carried through it. Other nanoscale sensors work on a similar principle, picking up differences in the flow of electrical current through oscillating silicon beams or carbon nanotubes when they bind to molecules of interest. Optical resonators might be even more sensitive, says Hone, because optical devices are "better behaved," giving clearer signals than electrical devices do.
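
The sensing scheme Hone describes follows from the textbook resonator relation f = (1/2π)√(k/m): binding a small mass dm lowers the frequency by a fractional shift of about dm/(2m). The spring constant and masses below are illustrative assumptions, not values from the Yale device.

```python
import math

k = 50.0       # effective spring constant of the beam, N/m (assumed)
m = 1e-15      # effective beam mass, kg (assumed, femtogram scale)

def resonant_freq(mass):
    return math.sqrt(k / mass) / (2.0 * math.pi)

f0 = resonant_freq(m)
dm = 1e-18                       # mass of bound protein, kg (assumed)
f1 = resonant_freq(m + dm)
frac_shift = (f0 - f1) / f0      # ~ dm / (2 m): tiny mass, measurable shift
```

Because the shift scales as dm/m, the lighter the beam, the larger the fractional change a given bound protein produces, which is why nanoscale resonators make such sensitive detectors.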

However, such applications are many years away. The device is still in very early development in Tang's lab, where his group is refining its mechanical properties.

Democrat Death Tax
Jan 19, 2008

are you done yet you boring retard

May 22, 2005

And the Lord said, Behold, the people are one, and all have one language: and now nothing will be restrained from them, which they have imagined to do.

Democrat Death Tax posted:

are you done yet you boring retard

no I got like a thousand of these, they are pretty cool read some:

Implants that deliver a drug to just the right place in the body could become "biobatteries" that release the drug at exactly the right rate.


At present, it is difficult to control how quickly implants release their payload. The biobattery produces a current of a known strength, and it is this that controls the drug's release.

The smart implant is based on magnesium alloy stents that are being developed for surgeons to use as temporary splints to keep damaged blood vessels in shape while they heal. Magnesium is used because it will corrode away inside the body safely when the stent's job is done.

A team led by Gordon Wallace of the University of Wollongong in New South Wales, Australia, made use of this to make a biobattery from a magnesium alloy anode and a conducting polymer cathode that carried an anti-inflammatory drug. They immersed the device in an electrolyte to simulate the body fluids around a real implant.

As the magnesium oxidised and the polymer reduced, a current was generated in the device that reversed the electrostatic charges holding the drug molecules to the polymer.

To fine-tune the rate of drug delivery, the team coated the magnesium alloy with a biodegradable polymer that slowed its corrosion. The drug release rate is engineered into the device's structure, Wallace told the Medical Bionics meeting in Lorne near Melbourne last week. The devices could be used in any implant that corrodes, such as titanium hip joints, which form titanium oxide on their surface.
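One simple way to model "the current controls the drug's release" is to treat cumulative release as proportional to the charge passed through the galvanic cell. This is a toy model with made-up parameters, not figures from Wallace's group:

```python
def drug_released_mg(current_amps, seconds, mg_per_coulomb):
    """Cumulative drug released, assuming release is proportional to total charge.

    charge (C) = current (A) * time (s); each coulomb frees a fixed mass of drug
    from the conducting-polymer cathode.
    """
    charge_coulombs = current_amps * seconds
    return charge_coulombs * mg_per_coulomb

# Hypothetical numbers for illustration only:
i = 50e-6     # 50 microamps of galvanic current
yield_rate = 0.02  # mg of drug freed per coulomb
print(drug_released_mg(i, 3600, yield_rate))  # mg released over one hour
```

Under this model, slowing the magnesium's corrosion (the biodegradable coating) lowers the current, which directly lowers the release rate — which is exactly the knob the article says the team engineered into the device's structure.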


The Welfare Queen
Dec 11, 2008

by Ozma

Owlofcreamcheese posted:

no I got like a thousand of these, they are pretty cool read some:

