|
What is Python? (from http://www.python.org) Python is a dynamic object-oriented programming language that can be used for many kinds of software development. It offers strong support for integration with other languages and tools, comes with extensive standard libraries, and can be learned in a few days. Many Python programmers report substantial productivity gains and feel the language encourages the development of higher quality, more maintainable code.

Official Website: http://www.python.org/
Official Documentation: http://docs.python.org/
Python Quick Reference: http://rgruet.free.fr/
Semi-Official Python FAQ: http://effbot.org/pyfaq/
Official Python Cheeseshop: http://pypi.python.org/pypi
Official Python Wiki: http://wiki.python.org/moin/
Python development tools, including editors: http://wiki.python.org/moin/DevelopmentTools
Official Planet Python (weblogs): http://planet.python.org/
Unofficial Planet Python (more weblogs): http://www.planetpython.org/
Python Magazine: http://pythonmagazine.com/
Norvig's Infrequently Asked Questions: http://norvig.com/python-iaq.html - This man exemplifies elegant code. His solutions are just... beautiful.

Python tutorials and free online books:
What version should I use? Python 2.x or 3.x? http://wiki.python.org/moin/Python2orPython3 - The official website tells all.

Alternative Python interpreter implementations
Python: Myths about Indentation: http://www.secnetix.de/~olli/Python/block_indentation.hawk
Python Editors and IDEs: http://wiki.python.org/moin/PythonEditors (Although you should use VIM: http://sontek.net/turning-vim-into-a-modern-python-ide)
More about Python web frameworks than you could ever want to know: http://wiki.python.org/moin/WebFrameworks - though you should just stick to Django or Flask. Seriously!

PyCon US! PyCon US is the largest annual gathering for the community using and developing the open-source Python programming language. PyCon is organized by the Python community for the community. We try to keep registration far cheaper than most comparable technology conferences, to keep PyCon accessible to the widest group possible. http://us.pycon.org/2012/

There are many PyCons - or Python conferences - throughout the world. Should the US one not be your cup of tea, you can find one at: http://www.pycon.org/

m0nk3yz posted: Why I like Python: I enjoy its syntax, dynamic (latent) typing, and standard library, and truly enjoy programming in the language. I've been hacking in Python for about (edit) 7 or 8 years at this point, and I have come to really enjoy the community and ecosystem around the language as well. Python is expressive, and while I've run into bits of Python code that make me WTF, I've rarely run into a chunk I couldn't read or understand - you have to go out of your way (or be abusive with, say, list comprehensions) to make Python unreadable. And just to pimp my own "Good to great Python reads": http://jessenoller.com/good-to-great-python-reads/

TasteMyHouse posted: Since Python is often recommended as a beginner's language (especially here in CoC) it'd be nice for there to be a short discussion / link to a discussion about interpreted vs. compiled in general, the distinction between CPython and Python itself, etc. I know that stuff kind of confused me when I was looking at Python and didn't know poo poo from poo poo.
http://en.wikipedia.org/wiki/Interpreted_language http://en.wikipedia.org/wiki/Python_(programming_language) m0nk3yz fucked around with this message at 04:26 on Aug 27, 2011 |
# ¿ Nov 4, 2007 19:10 |
|
|
Bonus posted: Yeah I agree, Python is by far my favourite language. It's concise yet readable. It's just a joy to work in, I love how the modules work, the duck typing is great, all in all I really like it. Well, everything except the GIL in CPython, which basically prevents Python from running on multiple cores concurrently. However, there's a great module called Parallel Python which lets you do just that.

That's only partially true: Python can use multiple cores; Python threads cannot (sort of - unless they're in a C module that has released the GIL, or in blocking I/O), as shown by effbot's recent wide-finder experiment (http://effbot.org/zone/wide-finder.htm). If you fork processes you can easily make use of multiple cores. Of course, fork/exec means you lose the shared context of pthreads. I personally prefer the Processing module (http://pypi.python.org/pypi/processing/) over the Parallel Python module; it's thread-API compatible, which for most of my applications means it is a drop-in replacement. As for the GIL: yes, the CPython interpreter has a GIL, which is primarily a feature for the Python developers themselves, but there are talks and projects to add something like Parallel Python/Processing to the stdlib (I'm considering doing a PEP for Processing) and a patch set for Python 3000 that allows free-threading.

More on the GIL:
GvR: http://www.artima.com/weblogs/viewpost.jsp?thread=214235
Brett Cannon: http://sayspy.blogspot.com/2007/11/idea-for-process-concurrency.html
Bruce Eckel: http://www.artima.com/weblogs/viewpost.jsp?thread=214303

m0nk3yz fucked around with this message at 01:45 on Nov 5, 2007 |
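The fork-based approach described above can be sketched with the stdlib of today: the Processing module's API later landed in the standard library as multiprocessing (PEP 371), so this is a modern-equivalent sketch, not the 2007-era module itself.

```python
# Sketch of the "fork processes to use multiple cores" approach: the
# processing module's API survives in the stdlib as multiprocessing.
# Each worker runs in its own interpreter, so one process's GIL does
# not serialize the others.
from multiprocessing import Pool

def cpu_bound(n):
    # A toy CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Four CPU-bound jobs run in parallel across processes.
        results = pool.map(cpu_bound, [10_000] * 4)
    print(results[0])
```

Unlike threads, the workers share no state by default; data passed to `map` is pickled across process boundaries.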
# ¿ Nov 4, 2007 19:58 |
|
N.Z.'s Champion posted: After hating on the XML processing in Python (4suite is sluggish and buggy) I found LXML and P4X. So use them unless you want to break your brain.

Which version of Python were you using? For my XML needs I've loved ElementTree (and its brother, cElementTree). ElementTree is in the stdlib as of 2.5.
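For reference, ElementTree lives in the stdlib as xml.etree.ElementTree (modern Pythons use the cElementTree-style C accelerator automatically). A minimal parse-and-query sketch:

```python
# ElementTree in the stdlib: parse a document and pull out attributes.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<library><book title='Core Python'/><book title='Dive Into Python'/></library>"
)
# findall matches direct children by tag; get reads an attribute.
titles = [book.get("title") for book in doc.findall("book")]
print(titles)  # ['Core Python', 'Dive Into Python']
```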
|
# ¿ Nov 5, 2007 00:11 |
|
Just to share, here's some good reading on Python Metaclasses: http://markshroyer.com/blog/2007/11/09/tilting-at-metaclass-windmills/ http://www.ibm.com/developerworks/linux/library/l-pymeta.html http://en.wikibooks.org/wiki/Programming:Python_MetaClasses
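A tiny illustration of the kind of thing those articles cover (the example is mine, not from the linked posts): a metaclass runs at class-creation time, so it can observe or rewrite every class built with it. A classic use is a self-populating registry.

```python
# A metaclass that records every class created with it. __new__ runs
# once per class statement, before any instances exist.
class Registry(type):
    classes = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        mcls.classes[name] = cls  # register by class name
        return cls

class Plugin(metaclass=Registry):
    pass

class AudioPlugin(Plugin):  # subclasses inherit the metaclass
    pass

print(sorted(Registry.classes))  # ['AudioPlugin', 'Plugin']
```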
|
# ¿ Nov 12, 2007 12:32 |
|
deimos posted:http://effbot.org/librarybook/ It was/is added
|
# ¿ Nov 12, 2007 15:31 |
|
deimos posted:I am blind, sorry. Nice, I totally missed that one. I am quite happy with what's coming down the pipe for py3k
|
# ¿ Nov 13, 2007 01:09 |
|
duck monster posted:Ah. Nice. Much more elegant than the get_foo set_foo stuff. Speaking of decorators, this popped up on pypi this morning - simple threaded/threadpool decorators: http://pypi.python.org/pypi/threadec/
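The thread doesn't show threadec's actual interface, but a decorator in the same spirit - hypothetical, not threadec's real API - is only a few lines:

```python
# A simple "run this function in a background thread" decorator,
# illustrating the idea behind threaded-decorator packages.
import functools
import threading

def threaded(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t = threading.Thread(target=func, args=args, kwargs=kwargs)
        t.start()
        return t  # caller can join() or ignore it
    return wrapper

results = []

@threaded
def record(x):
    results.append(x)

record(42).join()  # wait for the background thread to finish
print(results)     # [42]
```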
|
# ¿ Nov 13, 2007 12:55 |
|
Hammertime posted: I'm relatively python ignorant, I've only been learning it for a week or so ...

I actually just got done writing a fairly lengthy article talking about this (the GIL + threads, etc). Here's the short version. Python, in the CPython interpreter, uses OS-level threads (pthreads, for Unix people) and so is perfectly capable of leveraging multiple cores. However, this is blocked by the GIL, which prevents more than one thread from executing within the interpreter at once. This makes the interpreter (and writing extensions) simple, as you don't have to manage threads/memory - the core interpreter and the GIL will do that for you. The GIL is there to protect the interpreter's memory - not to hinder users or programmers. Ultimately, it is a CPython interpreter developer feature, not something which benefits end users (unless you are writing C extensions).

Which neatly segues to the next point - any Python code which is threaded and only calls Python code - e.g. number crunching - will run no faster (in fact slightly slower) than the same code written single-threaded. However, if the code you are writing uses any C code (and this includes built-in modules written in C) which is thread-safe and uses the PyGILState_STATE/Py_BEGIN_ALLOW_THREADS macros, your code will spread across processors and run faster than the single-threaded implementation. For example, 2 threads running a Fibonacci calculation will run slightly slower than the same calculation single-threaded - however, 2 threads performing disk I/O or socket calls will run much faster than the single-threaded example.

That all being said - the GIL is a feature for the interpreter developers, and it's been stated by many people that removing the GIL would make life quite difficult for a lot of contributors and users. There has been work to remove it, and I know of at least two people making a patch set for Python 3000 to remove the GIL but keep the ease of use.
There are, however, other options besides using the basic threading module, adding a time.sleep(.00001) (which forces the GIL to be released), and using I/O. There are actually many modules that not only mimic the threading API but allow applications to distribute load across processors and machines. Here's a "short list" of modules: http://code.google.com/p/python-distributed/wiki/PythonPackagesOfNote Check out the rest of the wiki for more information; of note is a quick rundown of Python GIL-related articles: http://code.google.com/p/python-distributed/wiki/PythonGILRelatedArticles I tend to prefer the Processing module (http://pypi.python.org/pypi/processing/), given I can drop it in and quickly replace threading when I've run into scaling problems.
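The I/O point above can be seen in miniature with the stdlib alone. time.sleep releases the GIL while blocked, just like blocking socket or disk calls, so threads overlap their waits even though only one thread runs Python bytecode at a time:

```python
# Threads overlapping "I/O" waits: four 0.2s sleeps finish in about
# 0.2s total, not 0.8s, because the GIL is released while blocked.
import threading
import time

def fake_io(seconds):
    time.sleep(seconds)  # stands in for a blocking socket/disk call

start = time.monotonic()
threads = [threading.Thread(target=fake_io, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(f"{elapsed:.2f}s")  # roughly 0.2s
```

Replace the sleep with a pure-Python loop and the same structure runs no faster than single-threaded code, which is exactly the CPU-bound case described above.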
|
# ¿ Nov 14, 2007 02:02 |
|
fake posted: I still don't really understand why the GIL helps interpreter developers. They can't write threadsafe code?

No, it means they don't have to worry about it being thread-safe - no worries about memory/data corruption or deadlock issues, and it keeps garbage collection simple and out of the way, etc. Yes, they can write thread-safe code, but as people much smarter than I (Goetz, Eckel, and others) have pointed out, threading is difficult to get dead-right and can lead to madness, especially when figuring out a testing matrix. Some could also argue that due to the simplicity of the C interface to Python, there has been a much higher rate of third-party modules developed for Python - even third-party modules that bypass the GIL for concurrency.
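One concrete piece of the "keeps memory management simple" argument: CPython tracks object lifetimes with reference counts, and every binding operation bumps a count. With the GIL, those increments need no per-object locking. You can watch the counts move:

```python
# Reference counting made visible. sys.getrefcount reports the count
# (plus one for its own temporary argument reference).
import sys

obj = object()
before = sys.getrefcount(obj)
alias = obj  # binding another name increments the refcount
after = sys.getrefcount(obj)
print(after - before)  # 1
```

Without a GIL, each of those increments would need an atomic operation or a lock, which is precisely the contributor-facing complexity the post describes.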
|
# ¿ Nov 14, 2007 03:17 |
|
bitprophet posted:Look, between that whitespace bullshit and this inability to handle threading natively, I don't see how anyone can consider Python anything but a toy language Dude, gently caress whitespace. http://timhatch.com/projects/pybraces/ Also, I pre-ordered the django book tonight.
|
# ¿ Nov 14, 2007 04:10 |
|
tef posted: Docstrings and pydoc.

Also note EpyDoc - it's above and beyond pydoc: http://epydoc.sourceforge.net/
|
# ¿ Nov 16, 2007 21:33 |
|
Hammertime posted: Edit2: Well, it removes much of python's threading overhead. Doesn't solve the GIL problem though, still bound to a single core. Neat idea, seems a big waste of effort though, unless I've missed something.

You're correct: Stackless just removes the stack, not the Global Interpreter Lock. As has been pointed out previously, if you're trying to sidestep the GIL, look at the modules I (and others) have linked to. If you're worried about the GIL + web applications a la Django: don't be, just yet. When Django apps are served up via mod_python within Apache, the model is wildly different (given Apache does use multiple CPUs/cores). Most of the time in web apps you're also I/O (not CPU) bound.

m0nk3yz fucked around with this message at 13:18 on Nov 19, 2007 |
# ¿ Nov 19, 2007 12:56 |
|
FuraxVZ posted:I'm looking to get into Python a bit; since I learn best from paper books, any recommendations? And yes, yes, I know, you learn by doing not reading, but I like books for grokking languages. Something with some good depth and the spirit of the language would be great. The latest edition of Core Python Programming is really good. Covers 2.5 to boot.
|
# ¿ Nov 25, 2007 01:27 |
|
I use TextMate and Eclipse+PyDev: TextMate when I'm not working on our huge in-house work projects, Eclipse+PyDev when I am (autocomplete is a must on large projects).
|
# ¿ Nov 28, 2007 17:19 |
|
If you want an excellent newforms walkthrough, check out the intro to newforms posts on James Bennett's blog: http://www.b-list.org/weblog/2007/nov/22/newforms/ http://www.b-list.org/weblog/2007/nov/23/newforms/ http://www.b-list.org/weblog/2007/nov/25/newforms/
|
# ¿ Nov 29, 2007 12:09 |
|
No Safe Word posted:Dangit, he asked though. He should have read the thread!
|
# ¿ Nov 29, 2007 22:50 |
|
Casao posted: I figure it's worth pointing out that PyObjC, which allows you to program native Cocoa apps for OS X, now comes standard with Leopard and Xcode, making Mac Python programming a bigger joy.

One quick note: if you are running 2.5.1 on Leopard and want the new PyObjC stuff from Subversion to compile, you have to edit Python.framework/Versions/2.5/lib/python2.5/config/Makefile and remove the "-isysroot /Developer/SDKs/MacOSX10.4u.sdk" chunk from it. Otherwise, the new PyObjC stuff won't compile. code:
|
# ¿ Dec 13, 2007 23:35 |
|
FYI, Pycon registration is now open! http://us.pycon.org/2008/about/ I'll more than likely be going, but skipping the Tutorials day.
|
# ¿ Jan 21, 2008 04:08 |
|
Here's an interesting post from Guido flying around the 'tubes: http://mail.python.org/pipermail/python-dev/2008-January/076194.html I personally like it as a "hey, that's neat for testing" idea - mock objects notwithstanding. I like the followup from Robert Brewer, where he saves an old copy of the method aside: http://mail.python.org/pipermail/python-dev/2008-January/076198.html
|
# ¿ Jan 30, 2008 14:08 |
|
w_hat posted:From one error to another: Interesting you should say this now - Someone just posted an excellent Python reading list here: http://www.wordaligned.org/articles/essential-python-reading-list
|
# ¿ Feb 2, 2008 03:35 |
|
porkface posted:You guys know you can safely run multiple versions of Python on the same machine? You just have to set them up right, and choose which version you want to be your generic system-wide install. This man speaks truth: I have several versions installed on my mac. I just twiddle my bash_profile to point to the one I want.
|
# ¿ Feb 14, 2008 04:17 |
|
Chutwig: Regarding smtplib - did you look at this thread on c.l.p? http://groups.google.com/group/comp...a1de3ed29133546
|
# ¿ Apr 9, 2008 20:49 |
|
Figured I'd ask - any Boston/Boston Metro area Python people (or people getting into Python) looking around for a Job()?
|
# ¿ Apr 16, 2008 02:29 |
|
deimos posted:Things to add to the op: Added
|
# ¿ May 21, 2008 12:52 |
|
Scaevolus posted: I thought the Global Interpreter Lock made this form of threading pointless? Or are you doing multiple processes?

You can use Parallel Python (http://www.parallelpython.com/) or pyprocessing (http://pyprocessing.berlios.de/) to easily sidestep the GIL. My particular favorite is the latter (pyprocessing). In fact, I am working on a PEP and with python-dev to see if I can get pyprocessing into the stdlib for 2.6. The pyprocessing module is a drop-in replacement for the threading module (its API is compatible).
|
# ¿ May 23, 2008 11:50 |
|
The processing PEP is now live (draft form): http://www.python.org/dev/peps/pep-0371/
|
# ¿ May 28, 2008 19:34 |
|
deimos posted:I just realized I've been reading m0nk3yz's blog for a while if that's his PEP. It is, and I hope it's been a decent read, although I've been heads down with the new job/PEP/python magazine work lately.
|
# ¿ May 29, 2008 13:51 |
|
And because tuples are immutable, they can be used as dictionary keys!
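For example, a tuple of coordinates makes a natural compound key, while a mutable list is rejected:

```python
# Tuples are hashable (when their elements are), so they work as
# dictionary keys; lists are mutable and therefore are not.
grid = {}
grid[(0, 0)] = "origin"
grid[(2, 3)] = "treasure"
print(grid[(2, 3)])  # treasure

try:
    grid[[0, 0]] = "boom"
except TypeError:
    print("lists are unhashable")
```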
|
# ¿ May 30, 2008 12:31 |
|
tripwire posted: This is a really cool post; I have a somewhat dumb followup question that you or someone else might be able to answer for me.

I think the problem right now for PyPy is turning itself from little more than a science project into something more concrete. Right now they're still fiddling around with a lot of stuff (see: http://morepypy.blogspot.com/), and they're lacking either resources or focus. Of course, maybe I'm too focused on shipping stuff. As for how well it would mesh with PP and pyprocessing: we'll see. It looks like my PEP (371) is getting accepted, and some of the ramifications of that may be some work on my part (or others') to port the module to Jython/IronPython/PyPy - but given the lack of a GIL on the first two, it may not need to be ported.
|
# ¿ Jun 3, 2008 14:21 |
|
PEP 371 has been accepted/final!
|
# ¿ Jun 6, 2008 13:02 |
|
Scaevolus posted: Python 2.6b1 was released, changelog here

Yeah, just note I boned the patch for the multiprocessing module and forgot to add a portion to the makefile - see http://bugs.python.org/issue3150 Rant: the fact that I have to add a chunk to setup.py in trunk *and* edit a makefile *and* twiddle some of the document indexes to add a single package sucks.
|
# ¿ Jun 20, 2008 12:23 |
|
tripwire posted: m0nk3yz, I know you worked on pyprocessing so perhaps you can answer a stupid question. I'm trying to parallelize a serial python program; in the program there is a list of "chromosome" objects which are supposed to get a unique id when they are instanced. The relevant code for it looks like this:

You have a couple of ways of doing this - you have already touched on one, which is to change __get_new_id to generate pseudo-random integers for each of the objects, basically making it a shared object with locks/etc. (see processing.RLock etc.). In my code, I generate empty objects with unique IDs ahead of time (think "BlankObject.id") and pump a good amount of them into a shared queue which I then pass into the processes (processing.Queue). The bad thing about my approach is that you could run out of blank objects, so I have to keep a producer in the background pumping in new objects so the workers generating the objects always have new blanks. In your case it does make sense to make a new shared object which essentially generates the unique IDs. In my case, I could also just fill a queue with unique objects - or subclass processing.Queue and override the get() method to generate batches of IDs if the queue is empty. A simpler approach is to pick a seed and pass it to the child processes so they can in turn pass it to a random call - you have a pretty low chance of collisions with random, especially if your ID includes some other attribute of the object. In another implementation, each object I spawned used a random number (generated from a seed) plus 2-3 other attributes of the object being created. A few things to think about: when using processing.Queue, you pay a serialization and deserialization cost for things coming in and out of the queue. The same goes for the cost of lock acquisition and release; it really depends on where you want to take the hit.
If you go with the "shared object generating the ids" approach - that object is going to have to keep an ever-growing list of IDs it's handed out so that it really does ensure that there isn't a conflict. Random thought: use a seed passed to random and the machine-time (time.time) to generate the IDs m0nk3yz fucked around with this message at 14:14 on Jun 24, 2008 |
# ¿ Jun 24, 2008 14:10 |
|
Bozart posted: Why not just use the incremented IDs, and have each process remember its process ID. If you want to combine the results, you can create unique IDs then from the PID and innovation number. Also, I am not sure if NEAT would work if the innovation numbers were not strictly increasing, but I could be wrong. I would mainly just be way too lazy to go through the code and see what I would have to change to use random IDs instead of incremented ones, but to each their own.

Good idea - also, each process and thread in Python supports get-name/get-id calls; you can even name them anything you want (say, a unique seed for each one, which lets you make unique names for each process namespace).
|
# ¿ Jun 24, 2008 16:19 |
|
Ok, so a question for all of you. I'm moving forward with a project to design a ton of function tests in Nose - the problem being, each test needs to be "told" about the attributes of the system under test (ip addresses, etc). I can do a few things:
|
# ¿ Jul 7, 2008 21:47 |
|
For those of you using nose, I uploaded a new configuration data plugin yesterday: http://pypi.python.org/pypi/nose-testconfig/ It's a quick and dirty method of passing configuration data down to tests from within the context of nose.
|
# ¿ Jul 29, 2008 12:35 |
|
Yes, right now (provided I stop breaking the build) the 2.6 and 3.0 releases are "on track". Download and test the latest betas, people!
|
# ¿ Aug 22, 2008 20:51 |
|
bitprophet posted: Not a bad idea (as long as you use the built-in "runserver" -- unless you're already a web dev, don't get bogged down with installing Apache or anything), doing the Django tutorial and then making a simple site will expose you to a lot of real-world Python very quickly

Apache deployment may cause your hair to turn grey or fall out. I'm almost positive my doctor needs to put me on blood pressure meds after I fought with it on OS X.
|
# ¿ Sep 28, 2008 23:01 |
|
duck monster posted: Except that literally every unix on the market except for the mac has a coherent policy on package management. I love the mac, but its package management has more in common with loving slackware than anything modern. Download tarball, compile, and pray the gods will be benevolent. And no, easy_install will usually light on fire the minute it's got to compile something and it becomes apparent that Apple puts things in different places to BSDs or SysV. Reminder there are about 5 distros of python out there for the mac, none of which are compatible.

Uh, my mac works great for me for installing Python things - it's all I develop on. Also, if you're on Leopard: you should not be installing any other Python distribution other than the one it came with, unless it's to your home directory or some /opt location. Thou shalt not change the system Python! And just to add - virtualenv works great too for keeping things in local sandboxes and out of the main framework directories.

m0nk3yz fucked around with this message at 21:08 on Sep 29, 2008 |
# ¿ Sep 29, 2008 21:04 |
|
outlier posted: That's not true - there's no problem with compiling a framework build for OSX. I routinely maintain 3 different pythons on my Mac (2.3 for backwards, 2.4 for zope, 2.5 for work). And once you factor eggs in, installing packages is a snap.

So your original point is true - Python works great on the Mac. Let me correct myself: I don't compile additional framework builds unless I need to - which is almost never, simply because I can install a perfectly functional build in my home directory and keep it completely self-contained. As for compiling packages - this is true pretty much everywhere. People get pretty hung up on having packages with everything pre-compiled, but what happens if you need version 2 and your upstream only provides version 1 (which happens to me with Ubuntu all the time)? I'm going to need to compile the darn thing. I guess I just don't see what the big deal is about compiling this stuff.
|
# ¿ Sep 30, 2008 14:04 |
|
|
Bozart posted: This, 1000 times.

IANAWU (I am not a Windows user) - but during the 2.6 release process, I had some multiprocessing bugs that came up on Windows that I needed to break out the VMware for. I cannot even begin to imagine the pain regular Windows users feel when it comes to the Windows build process. I eventually gave up, winged the patch, and deleted the VM. Also, gently caress MinGW.
|
# ¿ Sep 30, 2008 19:21 |