From holger at merlinux.eu Sun Apr 1 04:21:59 2012
From: holger at merlinux.eu (holger krekel)
Date: Sun, 1 Apr 2012 02:21:59 +0000
Subject: [py-dev] Performance tests with py.test
In-Reply-To:
References:
Message-ID: <20120401022159.GZ26028@merlinux.eu>

Hi Bogdan,

On Sun, Apr 01, 2012 at 02:16 +1100, Bogdan Opanchuk wrote:
> I am trying to wrap performance tests for my module into py.test. Each
> testcase is, basically, a data preparation stage (which can take some time)
> and the test itself. Is there some way (hooks, plugins) to make
> py.test print the value returned from the testcase (and make testcases
> time whatever they want)? What I want is just a table like the one
> after "py.test -v", but with return values instead of "PASSED". I
> googled it and haven't found any solutions.

I don't know of a directly available example solution.

> At the moment I am doing it as follows:
> 1. Override "pytest_pyfunc_call()" and save the returned result into
> the "pyfuncitem" parameter.
> 2. Override "pytest_runtest_makereport()", look for the "perf" mark and
> somehow output "item.result".
> 3. I haven't decided on the way to output it yet, because I am
> currently trying to find the place where "PASSED" is printed.

do you want to consider performance regressions as a failure?

> But this seems quite clunky and overcomplicated. Is there any better
> way to do this? If not for the data preparation, the currently
> existing time measuring system would probably be enough for me
> (although I'd still miss the ability to print a GFLOPS value instead of
> seconds), but the data preparation is testcase-specific and I do not
> want to move it to the "pytest_generate_tests()" hook.
>
> Thank you in advance.

Could you maybe provide a simple example test file and make up
some example output that you'd like to see?
Meanwhile, if you haven't already, you might want to look at the output
of "py.test --durations=10" and see about its implementation (mostly
contained in _pytest/runner.py, grep for 'duration').

best,
holger

From holger at merlinux.eu Sun Apr 1 04:30:26 2012
From: holger at merlinux.eu (holger krekel)
Date: Sun, 1 Apr 2012 02:30:26 +0000
Subject: [py-dev] OpenStack
In-Reply-To: <05BB196AB3DA6C4BBE11AB6C957581FE449EF695@sfo-exch-01.dolby.net>
References: <05BB196AB3DA6C4BBE11AB6C957581FE42A3A082@sfo-exch-01.dolby.net> <20120324134157.GM26028@merlinux.eu> <05BB196AB3DA6C4BBE11AB6C957581FE449EF695@sfo-exch-01.dolby.net>
Message-ID: <20120401023026.GA26028@merlinux.eu>

Hi Laurent, (sorry for the previous misnomer!)

I had a bit of a hard time reading your mail because you didn't use the
typical reply formatting. Actually i only now discovered the content; at
first i completely skipped it because i thought it was just a forwarded
mail ... others may have done the same. If it's not too much effort,
maybe repost as a real reply?

best,
holger

On Tue, Mar 27, 2012 at 09:23 -0700, Brack, Laurent P. wrote:
> Hi Holger,
>
> No problem for the delayed response. I know you were in the bay area a
> while back and may have felt everyone is always in a rush, but this is
> not our case :)
>
> -----Original Message-----
> From: holger krekel [mailto:holger at merlinux.eu]
> Sent: Saturday, March 24, 2012 6:42 AM
> To: Brack, Laurent P.
> Cc: py-dev at codespeak.net
> Subject: Re: [py-dev] OpenStack
>
> Hi Brack,
> [Laurent] >> This is my last name :)
>
> sorry for taking a while. Am still on travel, currently in the Sonoran
> desert. My answers up until the first half of May are thus likely to lag.
> (They do have a surprisingly good internet connection ... better than in
> the three-times more expensive hotel in the San Francisco valley last
> week).
>
> On Tue, Mar 20, 2012 at 10:27 -0700, Brack, Laurent P. wrote:
> > Forking from the [py-dev] pytest-timeout 0.2 e-mail thread.
> >
> > > I am interested in OpenStack but can you detail a bit more of
> > > what you want to achieve?
> >
> > I have attached a rough diagram of what we are building internally
> (hopefully it will not be filtered out).
>
> thanks for sharing.
>
> > About a year ago, we attended a presentation on OpenStack (when it was
> still driven by Anso Labs, before they got acquired by Rackspace).
> > We were (and are) currently using a private cloud (VMWare) but
> > contemplated the idea of scaling up to public clouds. The problem we had
> > was that we had to develop custom code for each cloud vendor, whether
> Amazon EC2, Penguin, VMWare, etc., so OpenStack was really a viable path
> to not lock ourselves in with a given vendor.
>
> sounds sensible.
>
> > While the bulk of our tests require embedded devices (in which case
> > virtualization makes little sense), a good chunk can be run on standard
> workstations (all OS flavors), and in this case it only makes sense to
> move to a virtual environment.
> >
> > PyTest combined with xdist was the dream come true, as we could focus
> > on developing tests meant to run on a single machine and later
> seamlessly parallelize their execution. There is nothing new here.
> >
> > We were thinking of writing a plugin to xdist (cxdist - c for cloud)
> > that would interface with OpenStack, using the pytest_xdist_setupnodes
> and pytest_configure_node hooks.
>
> ok.
>
> > In those hooks, the plugin would provision the machine (via OpenStack)
> > and then make xdist believe that it is dealing with physical slaves.
> > Finally, at the end of the run, we would tear down the slaves, leaving
> resources for other tests to run.
> >
> > We have not started on this yet, as scalability is not an issue, but
> > this is our plan. As the diagram shows, the red boxes are plugins we
> intend to release to the open source community.
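[The two pytest-xdist hooks mentioned above could be stubbed out in a conftest.py roughly as follows. This is only a sketch of the shape of such a plugin; provision_vm() is a hypothetical placeholder, not a real OpenStack API.]

```python
# conftest.py -- sketch of where cloud provisioning could plug into the
# pytest-xdist hooks named above.  provision_vm() is a hypothetical
# placeholder standing in for a cloud API call.

def provision_vm(spec):
    # stand-in for "boot an instance and wait until it is reachable"
    return "vm-for-%s" % spec

def pytest_xdist_setupnodes(config, specs):
    # called once with all node specs, before any gateways are created:
    # a natural place to provision one VM per spec
    return [provision_vm(spec) for spec in specs]

def pytest_configure_node(node):
    # called for each node after its gateway is set up
    return "configured %s" % node.gateway.id
```

[The return values above only exist so the sketch is observable; real implementations would mutate the specs / node state instead.]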
> Is your general idea to use the OpenStack integration to get
> parallelizable test runs with totally controllable environments?
>
> [Laurent] >> It is. We have a fairly large VMWare infrastructure but I
> feel we are locked in. Also, there is a point where scaling up this
> private cloud will simply make no sense from an economic standpoint.
> Finally, infrastructures like VMWare are of little use to the open
> source community.
>
> > Our teams write a lot of "data driven tests" (testing various AV
> > codecs with different configuration parameters using very similar
> verification procedures), and as a result we make heavy use of funcargs.
>
> Curious, are you using 2.2.3 for this? There are some plans to further
> improve parametrization where i would be interested in your feedback.
>
> [Laurent] >> Yes. As a matter of fact we had to make changes to our
> plugin to work properly with the new metafunc.parametrize method. Maybe
> here is the time to open a small parenthesis. We have simplified the
> generation process for our common users, although this doesn't prevent
> them from using custom hook implementations or factories. One thing that
> the plugin does is attach "meta data" to items (which is carried over
> from hook to hook). This meta data contains information about test cases
> on the TestLink server. One of the challenges with metafunc was to find
> a way to attach this information so that it would not be lost during the
> test generation done by pytest. In 2.1.3, with "addcall", we had done
> this by hacking the "param" formal parameter's intended use (while
> preserving the functionality). In 2.2.3 (the addcall hack was broken -
> meta data got lost), we found another way. Again a hack, and therefore
> no guarantee it will work with future versions. It would be nice to have
> a supported way to "attach" data as part of the parametrize process
> which is then carried over to the "item" being generated.
The idea is to carry over
> information that may be generated by a hook to another hook (whenever it
> makes sense).
>
> > The pytest_testlink plugin is very similar to the pytest Case
> Conductor plugin from the Mozilla folks
> (http://blargon7.com/2012/01/case-conductor-pytest-plugin-proposal/) but
> interacts with TestLink (teamst.org, in which we are currently involved
> - improving performance of web services etc.) as opposed to Case
> Conductor.
>
> Is pytest-testlink a plugin that you plan to work on?
>
> [Laurent] >> Actually we have it working. We have been using it for
> about 6 months now and I just made a new internal release (to fix the
> issue with 2.1.3+) which also supports "platforms" (a feature of
> TestLink) and filtering based on TestLink constructs (outcome, test case
> ID, keywords). The plugin relies on another package (called pytl) which
> provides an OO model of the TestLink server built on top of their
> existing web services. Those web services are quite buggy and incomplete,
> but we have built-in workarounds. In parallel we are working with the
> TestLink team to provide a more complete set of web services. We are
> also working on restructuring the code base allowing better integration
> via plugins (requirements management, reporting - we have a solution
> using BIRT) as well as performance improvements, test case libraries (for
> re-use between projects), project groups, etc. While the pytest testlink
> plugin is extensively documented and has so far proven to be pretty robust,
> I want to clean it up even more before releasing it (making it available
> as an open source package was the intent from the start).
>
> > However, in addition to linking a test case ID to a given python test,
> > we have implemented a simple mechanism to generate test cases from
> > what we call Test Description Records. Those are currently stored in
> > excel spreadsheets but will be stored in TestLink as test case
> properties.
The object injection is done automatically by the plugin
> using a custom marker indicating which function argument shall be used
> for TDR injection, as well as correlating TDRs with parametrized test
> functions.
>
> right. Driving things from a spreadsheet, maybe even in Google docs,
> sounds good to me, btw :)
>
> [Laurent] >> We considered this but decided otherwise just because of
> the limited functionality offered compared to M$ Office or LibreOffice
> (we are actually using xlrd/xlwt). As a matter of fact I wanted to find
> a way to abstract the "data source" for the TDRs. Google docs would be a
> perfect exercise.
>
> > The point is that by using the funcarg feature offered by pytest, we
> > can generate a tremendous number of test cases with little effort, and
> this will require scalability and use of virtualization over time.
> >
> > Sorry for the long description, but I wanted to give a complete
> overview to help comprehend where we are going.
> > Our plan is to eventually release all this to the open source
> community (when we think the plugins are mature and robust enough).
> >
> > If other people have a need/interest for those plugins (some of them
> > are not illustrated), I would be more than happy to accelerate this
> > process (getting approval from our Open Source Board). Since
> pytest_cxdist doesn't exist yet, this could be open source from the get-go.
>
> I wonder if the provisioning of VMs wouldn't fit better at the level of
> "tox", see http://tox.testrun.org - or even yet a level higher. tox
> creates virtual environments on the Python level. FYI there are plans
> to introduce a plugin system to tox and to make pytest-xdist use
> tox configuration directly.
>
> [Laurent] >> I need to educate myself more on tox. We are using tox for
> our CI system (Jenkins) but environment setup is actually done via
> buildout (tox invoking buildout - maybe we would have worked with tox
> from the start if we had known about it).
> But the tox idea is pretty good, as this is really where the testbed
> should be set up, rather than at runtime (pytest) when things can go wrong.
>
> Btw, from May on i (and others) should be available through my company
> merlinux for some consulting in case you need some more substantial
> support than occasional comments from far away deserts :)
>
> [Laurent] >> Actually this sounds interesting. Let me think about
> it. We are a rather small team so we can use all the help we can get.
> Enjoy and drink a lot of water :)
>
> Cheers/Laurent

> best,
> holger

From mantihor at gmail.com Sun Apr 1 05:09:50 2012
From: mantihor at gmail.com (Bogdan Opanchuk)
Date: Sun, 1 Apr 2012 13:09:50 +1000
Subject: [py-dev] Performance tests with py.test
In-Reply-To: <20120401022159.GZ26028@merlinux.eu>
References: <20120401022159.GZ26028@merlinux.eu>
Message-ID:

Hi Holger,

On Sun, Apr 1, 2012 at 12:21 PM, holger krekel wrote:
> do you want to consider performance regressions as a failure?

Not really. I just need some table with performance results that I could
get for different systems/versions and compare. Besides, performance
regressions can be implemented using existing functionality, because they
do not have a continuous result associated with them: only pass/fail.

> Could you maybe provide a simple example test file and make up
> some example output that you'd like to see?

Sure. Consider the following test file:

-----
import pytest

def test_matrixmul():
    pass

@pytest.mark.perf('seconds')
def test_reduce():
    #
    #
    return 1.0

@pytest.mark.perf('GFLOPS')
def test_fft():
    #
    #
    return 1.0, 1e10
-----

Here "test_matrixmul()" is a normal pass/fail test, "test_reduce()" is
marked as a performance test that returns its execution time, and
"test_fft()" is marked as a test that returns execution time plus the
number of operations (thus allowing us to calculate a GFLOPS value).
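[To make the arithmetic in the second case explicit: the GFLOPS figure would be derived from the returned (seconds, operations) pair as below. This is a trivial hypothetical helper, not part of any plugin.]

```python
def gflops(seconds, operations):
    # billions of floating-point operations per second
    return operations / seconds / 1e9

# the values returned by test_fft() above:
print(gflops(1.0, 1e10))  # -> 10.0
```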
I have put together a clunky solution (see the end of this letter) using
existing hooks that gives me more or less what I want to see:

$ py.test -v
...
test_test.py:3: test_matrixmul PASSED
test_test.py:6: test_reduce 1.0 s
test_test.py:10: test_fft 10.0 GFLOPS
...

The only problem here is that I have to explicitly increase the verbosity
level. I'd prefer 'perf' marked tests to show their result even at the
default verbosity, but I haven't found a way to do that yet.

> Meanwhile, if you haven't already you might want to look at the output
> of "py.test --durations=10" and see about its implementation (mostly
> contained in _pytest/runner.py, grep for 'duration').

Yes, I know about it, but it is not quite what I need:
- it measures the time of the whole testcase, while I usually need to
  time only a specific part
- it does not allow me to measure anything more complicated (e.g. GFLOPS;
  as another variant, I may want to see the error value)
- it prints its report after all the tests are finished, while it is much
  more convenient to see a testcase's result as soon as it finishes (my
  performance tests may run for quite a long time)

So, the solution I have now is shown below. The "pytest_pyfunc_call()"
implementation annoys me the most, because I had to copy-paste it from
python.py, so it exposes some py.test internals and can easily break when
something (seemingly hidden) inside the library is changed.
-----
def pytest_configure(config):
    config.pluginmanager.register(PerfPlugin(config), '_perf')

class PerfPlugin(object):
    def __init__(self, config):
        pass

    def pytest_pyfunc_call(self, __multicall__, pyfuncitem):
        # collect the testcase return result
        testfunction = pyfuncitem.obj
        if pyfuncitem._isyieldedfunction():
            res = testfunction(*pyfuncitem._args)
        else:
            funcargs = pyfuncitem.funcargs
            res = testfunction(**funcargs)
        pyfuncitem.result = res

    def pytest_report_teststatus(self, __multicall__, report):
        outcome, letter, msg = __multicall__.execute()
        # if we have some result attached to the testcase,
        # print it instead of 'PASSED'
        if hasattr(report, 'result'):
            msg = report.result
        return outcome, letter, msg

    def pytest_runtest_makereport(self, __multicall__, item, call):
        report = __multicall__.execute()
        # if the testcase has passed and has the 'perf' marker,
        # process its results
        if call.when == 'call' and report.passed and hasattr(item.function, 'perf'):
            perf = item.function.perf
            perftype = perf.args[0]
            if perftype == 'seconds':
                report.result = str(item.result) + " s"
            else:
                seconds, operations = item.result
                report.result = str(operations / seconds / 1e9) + " GFLOPS"
        return report
-----

From memedough at gmail.com Thu Apr 12 10:02:27 2012
From: memedough at gmail.com (meme dough)
Date: Thu, 12 Apr 2012 18:02:27 +1000
Subject: [py-dev] Slave fail with Interrupted system call?
Message-ID:

Hi,

Anyone know why an execnet slave would get "Interrupted system call"?
creating slavegateway on <__main__.Popen2IO instance at 0x20fa2d8>
RECEIVERTHREAD: starting to run
received
received
execution starts[1]: "\nimport os\npath, nice, env = channel.receive()\
execution finished
sent 1
sent channel close message
received 1
channel.__del__
execution starts[3]: '\nimport sys, os\nchannel.send(dict(\n executa
sent
execution finished
sent 3
sent channel close message
received 3
channel.__del__
execution starts[5]: '"""\n This module is executed in remote subpro
received
RECEIVERTHREAD Traceback (most recent call last):
  File "", line 622, in _thread_receiver
  File "", line 802, in load
  File "", line 86, in read
IOError: [Errno 4] Interrupted system call
RECEIVERTHREAD entering finalization
RECEIVERTHREAD leaving finalization
sent
sent
sent
sent
sent
sent
execution finished

:)

From Ronny.Pfannschmidt at gmx.de Sun Apr 15 20:26:34 2012
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Sun, 15 Apr 2012 20:26:34 +0200
Subject: [py-dev] ANN: pytest-couchdbkit 0.4
Message-ID: <4F8B12DA.4010205@gmx.de>

Hello,

I'm happy to announce the release of pytest-couchdbkit 0.4,
which is the first release of the py.test couchdbkit integration i
consider reasonably complete for others to use.

It is available on http://pypi.python.org/pypi/pytest-couchdbkit/0.4

-- Ronny

pytest-couchdbkit
=================

pytest-couchdbkit is a simple pytest extension that manages test
databases for your couchdbkit-using apps.

In order to use it, you only need to set the ini option
`couchdbkit_suffix` to something fitting your app.
Additionally you may use `couchdbkit_backend` to select
the gevent/eventlet backends.

To set up couchapps before running the tests, there is the
`pytest_couchdbkit_push_app(server, dbname)` hook.
It can be used to create a pristine database, which is replicated into
each test database.
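[A conftest.py implementation of that hook could look roughly like this. This is only a sketch: the design document is a made-up example, and get_or_create_db/save_doc are assumed to be the usual couchdbkit Server/Database methods.]

```python
# conftest.py -- sketch of a pytest_couchdbkit_push_app implementation.
# The hook receives the couchdbkit Server and the name of the pristine
# database; anything saved here ends up replicated into each test database.

def pytest_couchdbkit_push_app(server, dbname):
    db = server.get_or_create_db(dbname)
    # a made-up design document, standing in for a real couchapp push
    db.save_doc({
        "_id": "_design/demo",
        "views": {"all": {"map": "function(doc){ emit(doc._id, null); }"}},
    })
    return db
```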
The provided funcarg `couchdb` will be a freshly flushed database named
`pytest_` + couchdbkit_suffix. Additionally, after each test item, the
database will be dumped to tmpdir.join(couchdb.dump), which is a simple
file having entries in the format::

    number(\d+) + "\r\n" + number bytes + "\r\n"

entries are:

* the db info
* documents
* raw attachment data following the document

Attachments are ordered by name, so they can be reassigned to their
metadata on loading.

The dump format is meant to be human-readable.

Future
------

* fs fixtures (like couchapp)
* code fixtures
* dump fixtures
* comparing a db to sets of defined fixtures

CHANGELOG
=========

from 0.3 to 0.4
---------------

- add pytest_couchdbkit_push_app hook

From holger at merlinux.eu Mon Apr 16 04:27:42 2012
From: holger at merlinux.eu (holger krekel)
Date: Mon, 16 Apr 2012 02:27:42 +0000
Subject: [py-dev] [TIP] ANN: pytest-couchdbkit 0.4
In-Reply-To: <4F8B12DA.4010205@gmx.de>
References: <4F8B12DA.4010205@gmx.de>
Message-ID: <20120416022741.GZ10174@merlinux.eu>

Hey Ronny,

Not using couchdb myself yet but that might change soon - do you have a
complete usage example somewhere of how to use it, or did i miss that?

best,
holger

On Sun, Apr 15, 2012 at 20:26 +0200, Ronny Pfannschmidt wrote:
> Hello,
>
> I'm happy to announce the release of pytest-couchdbkit 0.4,
> which is the first release of the py.test couchdbkit integration i
> consider reasonably complete for others to use.
>
> It is available on http://pypi.python.org/pypi/pytest-couchdbkit/0.4
>
> -- Ronny
>
> pytest-couchdbkit
> =================
>
> pytest-couchdbkit is a simple pytest extension that manages test databases
> for your couchdbkit-using apps.
>
> In order to use it, you only need to set the ini option
> `couchdbkit_suffix` to something fitting your app.
> Additionally you may use `couchdbkit_backend` to select
> the gevent/eventlet backends.
>
>
> To setup couchapps before running the tests,
> there is the `pytest_couchdbkit_push_app(server, dbname)` hook.
>
> It can be used to create a pristine database,
> which is replicated into each test database.
>
> The provided funcarg `couchdb` will be a freshly flushed database
> named `pytest_` + couchdbkit_suffix.
> Additionally, after each test item,
> the database will be dumped to tmpdir.join(couchdb.dump)
>
> which is a simple file having entries in the format::
>
>     number(\d+) + "\r\n" + number bytes + "\r\n"
>
> entries are:
>
> * the db info
> * documents
> * raw attachment data following the document
>
> Attachments are ordered by name,
> so they can be reassigned to their metadata on loading.
>
> The dump format is meant to be human-readable.
>
> Future
> ------
>
> * fs fixtures (like couchapp)
> * code fixtures
> * dump fixtures
> * comparing a db to sets of defined fixtures
>
> CHANGELOG
> =========
>
> from 0.3 to 0.4
> ---------------
>
> - add pytest_couchdbkit_push_app hook
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python

From tetsuya.morimoto at gmail.com Mon Apr 16 09:24:15 2012
From: tetsuya.morimoto at gmail.com (Tetsuya Morimoto)
Date: Mon, 16 Apr 2012 16:24:15 +0900
Subject: [py-dev] pytest documents with Japanese translation
Message-ID:

Hi,

I finished translating the pytest documents from English to Japanese. My
repository is below, and I made the named branch "2.2.3-ja" for the
translated docs.

https://bitbucket.org/t2y/pytest-ja

I would like to publish this Japanese translation on pytest.org in some
way. But I don't know whether that's possible, or how the current docs
are deployed. I think putting the original and translated docs in
separate places makes deployment easy and avoids conflicts. Of course,
other suggestions are welcome.

Additionally, I can maintain the Japanese docs in the future.
So, I can do the work myself if you give me the needed permissions.

thanks,
Tetsuya

From Ronny.Pfannschmidt at gmx.de Tue Apr 17 20:12:20 2012
From: Ronny.Pfannschmidt at gmx.de (Ronny Pfannschmidt)
Date: Tue, 17 Apr 2012 20:12:20 +0200
Subject: [py-dev] ANN: pytest-couchdbkit 0.5.1
Message-ID: <4F8DB284.50206@gmx.de>

Hello,

i am sorry to have to release pytest-couchdbkit 0.5.1 so early.
i noticed way too late that 0.4 broke normal testing; that is fixed now.
0.5.1 was necessary because i noticed a typo in MANIFEST.in right after
the 0.5 release, which silently dropped the tests from the release.

as a little gem, i added support for pytest-xdist to pytest-couchdbkit,
so happy parallel testing.

-- Ronny

pytest-couchdbkit
=================

pytest-couchdbkit is a simple pytest extension that manages test
databases for your couchdbkit-using apps.

In order to use it, you only need to set the ini option
`couchdbkit_suffix` to something fitting your app.
Additionally you may use `couchdbkit_backend` to select
the gevent/eventlet backends.

To set up couchapps before running the tests, there is the
`pytest_couchdbkit_push_app(server, dbname)` hook.
It can be used to create a pristine database, which is replicated into
each test database.

The provided funcarg `couchdb` will be a freshly flushed database named
`pytest_` + couchdbkit_suffix. Additionally, after each test item, the
database will be dumped to tmpdir.join(couchdb.dump), which is a simple
file having entries in the format::

    number(\d+) + "\r\n" + number bytes + "\r\n"

entries are:

* the db info
* documents
* raw attachment data following the document

Attachments are ordered by name, so they can be reassigned to their
metadata on loading.

The dump format is meant to be human-readable.
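[The entry format above (an ASCII length, "\r\n", that many payload bytes, "\r\n") can be read back with a few lines of Python. This is a sketch written against the described format, not code from the plugin.]

```python
def iter_dump_entries(data):
    # yield the raw payload of each length-prefixed entry
    pos = 0
    while pos < len(data):
        nl = data.index(b"\r\n", pos)
        length = int(data[pos:nl])
        start = nl + 2
        yield data[start:start + length]
        pos = start + length + 2  # skip the payload and its trailing "\r\n"

# two made-up entries: a 5-byte and a 2-byte payload
dump = b"5\r\nhello\r\n2\r\nhi\r\n"
print(list(iter_dump_entries(dump)))  # -> [b'hello', b'hi']
```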
Future
------

* fs fixtures (like couchapp)
* code fixtures
* dump fixtures
* comparing a db to sets of defined fixtures

CHANGELOG
=========

from 0.5 to 0.5.1
-----------------

- fix MANIFEST.in

from 0.4 to 0.5
---------------

- fix breaking testruns that don't actually use it
- add a lot of tests i should have done before 0.4
- add support for pytest-xdist; if a slave is detected, push_app won't
  be called, and dbname gets the gw id appended

from 0.3 to 0.4
---------------

- add pytest_couchdbkit_push_app hook

From holger at merlinux.eu Fri Apr 20 05:45:20 2012
From: holger at merlinux.eu (holger krekel)
Date: Fri, 20 Apr 2012 03:45:20 +0000
Subject: [py-dev] pytest documents with Japanese translation
In-Reply-To:
References:
Message-ID: <20120420034520.GG10174@merlinux.eu>

Hi Tetsuya,

thanks for your work already!

i guess we need to work a bit to have both language versions generated.
I haven't looked at your branch yet but i guess we need to solve a few
issues:

* what would the eventual URLs look like? Maybe pytest.org/latest-jp/...
  instead of pytest.org/latest/? I'd definitely like to keep existing
  URLs working.

* how to organise the repository so that both EN and JP are included
  (without the need to have a branch)

* do we need to do anything special wrt the examples and their
  generation? (we use a tool called 'regendoc' written by ronny which
  regenerates the example outputs)

best,
holger

On Mon, Apr 16, 2012 at 16:24 +0900, Tetsuya Morimoto wrote:
> Hi,
>
> I finished translating pytest documents English to Japanese. My
> repository is below and I made the named branch "2.2.3-ja" for
> translating docs.
>
> https://bitbucket.org/t2y/pytest-ja
>
> I would like to publish this Japanese translation on pytest.org with
> some ways. But, I don't know it's possible or not, and how to deploy
> the current docs. I think putting original and translated docs
> separately is easy deployment to avoid some conflicts. Of course,
> other suggestions are welcome.
>
> Additionally, I can maintain Japanese docs in the future. So, I can
> work by myself if you give me needed permission.
>
> thanks,
> Tetsuya
> _______________________________________________
> py-dev mailing list
> py-dev at codespeak.net
> http://codespeak.net/mailman/listinfo/py-dev

From tetsuya.morimoto at gmail.com Sun Apr 22 20:43:17 2012
From: tetsuya.morimoto at gmail.com (Tetsuya Morimoto)
Date: Mon, 23 Apr 2012 03:43:17 +0900
Subject: [py-dev] pytest documents with Japanese translation
In-Reply-To: <20120420034520.GG10174@merlinux.eu>
References: <20120420034520.GG10174@merlinux.eu>
Message-ID:

Hi Holger,

Thanks for thinking about the Japanese translation.

> * how do the eventual URLs look like? Maybe pytest.org/latest-jp/...?
>   instead of pytest.org/latest/ ?
>   I'd definitely like to keep existing URLs working.

It's a good idea, and I suggest "latest-ja" is more proper than
"latest-jp", since "jp" is the country code.

> * how to organise the repository so that both EN and JP are included
>   (without the need to have a branch)

I forked the original repository for the Japanese translation and made a
named branch to represent the version, but do you want to manage the
translation in the original repository? If so, I know Sphinx has an i18n
feature using gettext since 1.1. Is this useful for your purpose?

http://sphinx.pocoo.org/latest/intl.html

> * do we need to do anything special wrt to the examples and their
>   generation? (we use a tool called 'regendoc' written by ronny which
>   regenerates the example outputs)

I used the original example code with the document, so the examples'
version is "pytest-2.2.2"? And then, I tried to run regen (make regen),
but I got an error. It seems regenerating has an encoding issue.
(sphinx)$ make regen
PYTHONDONTWRITEBYTECODE=1 COLUMNS=76 regendoc --update *.txt */*.txt
===== checking apiref.txt =====
Traceback (most recent call last):
  File "/Users/t2y/.virtualenvs/sphinx/bin/regendoc", line 9, in <module>
    load_entry_point('RegenDoc==0.2.dev1', 'console_scripts', 'regendoc')()
  File "/Users/t2y/work/python/regendoc/regendoc.py", line 201, in main
    return _main(options.files, should_update=options.update)
  File "/Users/t2y/work/python/regendoc/regendoc.py", line 196, in _main
    path.write(corrected)
  File "/Users/t2y/.virtualenvs/sphinx/lib/python2.7/site-packages/py/_path/local.py", line 385, in write
    data = py.builtin._totext(data, sys.getdefaultencoding())
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 22: ordinal not in range(128)
make: *** [regen] Error 1

thanks,
Tetsuya

On Fri, Apr 20, 2012 at 12:45 PM, holger krekel wrote:
> Hi Tetsuya,
>
> thanks for your work already!
>
> i guess we need to work a bit to have both language versions
> generated. I haven't looked at your branch yet but i guess
> we need to solve a few issues:
>
> * how do the eventual URLs look like? Maybe pytest.org/latest-jp/...?
>   instead of pytest.org/latest/ ?
>   I'd definitely like to keep existing URLs working.
>
> * how to organise the repository so that both EN and JP are included
>   (without the need to have a branch)
>
> * do we need to do anything special wrt to the examples and their
>   generation? (we use a tool called 'regendoc' written by ronny which
>   regenerates the example outputs)
>
> best,
> holger
>
> On Mon, Apr 16, 2012 at 16:24 +0900, Tetsuya Morimoto wrote:
>> Hi,
>>
>> I finished translating pytest documents English to Japanese. My
>> repository is below and I made the named branch "2.2.3-ja" for
>> translating docs.
>>
>> https://bitbucket.org/t2y/pytest-ja
>>
>> I would like to publish this Japanese translation on pytest.org with
>> some ways. But, I don't know it's possible or not, and how to deploy
>> the current docs.
>> I think putting original and translated docs
>> separately is easy deployment to avoid some conflicts. Of course,
>> other suggestions are welcome.
>>
>> Additionally, I can maintain Japanese docs in the future. So, I can
>> work by myself if you give me needed permission.
>>
>> thanks,
>> Tetsuya
>> _______________________________________________
>> py-dev mailing list
>> py-dev at codespeak.net
>> http://codespeak.net/mailman/listinfo/py-dev
>>
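[The UnicodeDecodeError in the regendoc traceback earlier in the thread is the classic Python 2 pattern of UTF-8 bytes being decoded with the default ASCII codec: 0xe3 is the lead byte of a three-byte UTF-8 sequence, as used for hiragana. A minimal reproduction of the failure mode, written in modern Python just to illustrate:]

```python
# 0xe3 begins a three-byte UTF-8 sequence, as in the traceback above
data = "あ".encode("utf-8")  # b'\xe3\x81\x82'
try:
    data.decode("ascii")     # what a default ASCII codec attempts
    outcome = "decoded"
except UnicodeDecodeError:
    outcome = "UnicodeDecodeError"
print(outcome)               # -> UnicodeDecodeError
print(data.decode("utf-8"))  # -> あ  (the right codec works)
```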