From sciwiseg at gmail.com Mon Oct 5 04:12:55 2020 From: sciwiseg at gmail.com (Edward Montague) Date: Mon, 5 Oct 2020 21:12:55 +1300 Subject: [SciPy-User] fftconvolve Message-ID: How might I check the results of this procedure, there are options available when evaluating this. Perhaps a one dimensional example will illustrate what's required. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schuldei at th-luebeck.de Mon Oct 5 10:39:30 2020 From: andreas.schuldei at th-luebeck.de (Schuldei, Andreas) Date: Mon, 5 Oct 2020 14:39:30 +0000 Subject: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Message-ID: I seem to trigger an internal error in scipy.optimize.minimize, somehow. This is the code that triggers the problem: import numpy as np from scipy.optimize import minimize def find_vector_of_minor_axis_from_chunk(data): n = 20 # number of points guess_center_point = data.mean(1) guess_center_point = guess_center_point[np.newaxis, :].transpose() guess_a_phase = 0.0 guess_b_phase = 0.0 guess_a_axis_vector = np.array([[1.0], [0.0], [0.0]]) guess_b_axis_vector = np.array([[0.0], [1.0], [0.0]]) p0 = np.array([guess_center_point, guess_a_axis_vector, guess_a_phase, guess_b_axis_vector, guess_b_phase]) def ellipse_func(x, data): center_point = x[0] a_axis_vector = x[1] a_phase = x[2] b_axis_vector = x[3] b_phase = x[4] t = np.linspace(0, 2 * np.pi, n) error = center_point + a_axis_vector * np.sin(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) return np.any(error_sum) popt, pcov = minimize(ellipse_func, p0, args=data) center_point, a_axis_vector, a_phase, b_axis_vector, b_phase = popt print(str(a_axis_vector + ", " + b_axis_vector)) shorter_vector = a_axis_vector if np.abs(a_axis_vector) > np.aps(b_axis_vector): shorter_vector = b_axis_vector return shorter_vector def main(): data = np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) print(str(find_vector_of_minor_axis_from_chunk(data))) if __name__ == '__main__': main() and this is the traceback for it: "C:\Users\X\PycharmProjects\lissajous-achse\venv\Scripts\python.exe" "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py" Traceback (most recent call last): File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 57, in main() File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 53, in main print(str(find_vector_of_minor_axis_from_chunk(data))) File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 29, in find_vector_of_minor_axis_from_chunk popt, pcov = minimize(ellipse_func, p0, args=data) File 
"C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\_minimize.py", line 604, in minimize return _minimize_bfgs(fun, x0, args, jac, callback, **options) File "C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\optimize.py", line 1063, in _minimize_bfgs if isinf(rhok): # this is patch for numpy ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() Process finished with exit code 1 I tried to take the suggestion in the traceback (to use `np.any()` or `np.all()`) on the return value of the `ellipse_func`, but just got the next internal scipy error. What can I do to get my optimization running? I am open to using other functions besides `minimize()`. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Oct 5 11:26:18 2020 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 5 Oct 2020 11:26:18 -0400 Subject: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() In-Reply-To: References: Message-ID: You can and should remove the `np.any()` from `ellipse_func()`; that's not what the error message is asking for. That line is expecting `rhok` to be a scalar and thus `isinf(rhok)` to be a scalar boolean. Because of an error in the construction of the parameter vector, `rhok` ended up being an array rather than a scalar, making `isinf(rhok)` a boolean array, which does not have a consistent truth value that can be used in the `if` statement. We make a recommendation in the error message that is usually helpful for the common mistakes that end up like this, at least for people who are writing the `if` statement. However, you are just using the function that has this `if` statement, and the advice does not apply to the code outside of the `if` statement. Instead, you need to make the parameter vector `x` (and `p0`, of course) a 1D vector of floats, not an object array mixing arrays and scalars. All of the minimizers are expecting functions that map 1D parameter vectors of floats to scalars. You will have to pack and unpack this parameter vector to get the different vectors and scalars you use to compute the error in `ellipse_func()`. On Mon, Oct 5, 2020 at 10:56 AM Schuldei, Andreas < andreas.schuldei at th-luebeck.de> wrote: > I seem to trigger an internal error in scipy.optimize.minimize, somehow. 
> > > This is the code that triggers the problem: > > > import numpy as np > from scipy.optimize import minimize > > > def find_vector_of_minor_axis_from_chunk(data): > n = 20 # number of points > guess_center_point = data.mean(1) > guess_center_point = guess_center_point[np.newaxis, :].transpose() > guess_a_phase = 0.0 > guess_b_phase = 0.0 > guess_a_axis_vector = np.array([[1.0], [0.0], [0.0]]) > guess_b_axis_vector = np.array([[0.0], [1.0], [0.0]]) > > p0 = np.array([guess_center_point, > guess_a_axis_vector, guess_a_phase, > guess_b_axis_vector, guess_b_phase]) > > def ellipse_func(x, data): > center_point = x[0] > a_axis_vector = x[1] > a_phase = x[2] > b_axis_vector = x[3] > b_phase = x[4] > t = np.linspace(0, 2 * np.pi, n) > error = center_point + a_axis_vector * np.sin(t * a_phase) + > b_axis_vector * np.sin(t + b_phase) - data > error_sum = np.sum(error ** 2) > return np.any(error_sum) > > popt, pcov = minimize(ellipse_func, p0, args=data) > center_point, a_axis_vector, a_phase, b_axis_vector, b_phase = popt > > print(str(a_axis_vector + ", " + b_axis_vector)) > shorter_vector = a_axis_vector > if np.abs(a_axis_vector) > np.aps(b_axis_vector): > shorter_vector = b_axis_vector > return shorter_vector > > > def main(): > data = np.array([[-4.62767933, -4.6275775, -4.62735346, > -4.62719652, -4.62711625, -4.62717975, > -4.62723845, -4.62722407, -4.62713901, > -4.62708749, -4.62703238, -4.62689101, > -4.62687185, -4.62694013, -4.62701082, > -4.62700483, -4.62697488, -4.62686825, > -4.62675683, -4.62675204], > [-1.58625998, -1.58625039, -1.58619648, > -1.58617611, -1.58620606, -1.5861833, > -1.5861821, -1.58619169, -1.58615814, > -1.58616893, -1.58613179, -1.58615934, > -1.58611262, -1.58610782, -1.58613179, > -1.58614017, -1.58613059, -1.58612699, > -1.58607428, -1.58610183], > [-0.96714786, -0.96713827, -0.96715984, > -0.96715145, -0.96716703, -0.96712869, > -0.96716104, -0.96713228, -0.96719698, > -0.9671838, -0.96717062, -0.96717062, > -0.96715744, -0.96707717, -0.96709275, > -0.96706519, -0.96715026, -0.96711791, > -0.96713588, -0.96714786]]) > > print(str(find_vector_of_minor_axis_from_chunk(data))) > > > if __name__ == '__main__': > main() > > and this is the traceback for it: > > "C:\Users\X\PycharmProjects\lissajous-achse\venv\Scripts\python.exe" > "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py" > Traceback (most recent call last): > File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", > line 57, in > main() > File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", > line 53, in main > print(str(find_vector_of_minor_axis_from_chunk(data))) > File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", > line 29, in find_vector_of_minor_axis_from_chunk > popt, pcov = minimize(ellipse_func, p0, args=data) > File > "C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\_minimize.py", > line 604, in minimize > return _minimize_bfgs(fun, x0, args, jac, callback, **options) > File > "C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\optimize.py", > line 1063, in _minimize_bfgs > if isinf(rhok): # this is patch for numpy > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all() > > Process finished with exit code 1 > > I tried to take the suggestion in the traceback (to use `np.any()` or > `np.all()`) on the return value of the `ellipse_func`, but just got the > next internal scipy error. > > What can I do to get my optimization running? 
I am open to using other > functions besides `minimize()`. > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schuldei at th-luebeck.de Mon Oct 5 15:22:19 2020 From: andreas.schuldei at th-luebeck.de (Schuldei, Andreas) Date: Mon, 5 Oct 2020 19:22:19 +0000 Subject: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() In-Reply-To: References: , Message-ID: thank you for your help. Now i rewrote the packing and unpacking and it looks like this: def find_vector_of_minor_axis_from_chunk(data): n = 20 # number of points center_point = data.mean(1) guess_center_point_x = center_point[0] guess_center_point_y = center_point[1] guess_center_point_z = center_point[2] guess_a_axis_vector_x = 1.0 guess_a_axis_vector_y = 0.0 guess_a_axis_vector_z = 0.0 guess_b_axis_vector_x = 0.0 guess_b_axis_vector_y = 1.0 guess_b_axis_vector_z = 0.0 guess_a_phase = 0.0 guess_b_phase = 0.0 p0 = np.array([guess_center_point_x, guess_center_point_y, guess_center_point_z, guess_a_axis_vector_x, guess_a_axis_vector_y, guess_a_axis_vector_z, guess_b_axis_vector_x, guess_b_axis_vector_y, guess_b_axis_vector_z, guess_a_phase, guess_b_phase]) def ellipse_func(x, data): center_point = np.array([[x[0]], [x[1]], [x[2]]]) a_axis_vector = np.array([[x[3]], [x[4]], [x[5]]]) b_axis_vector = np.array([[x[6]], [x[7]], [x[8]]]) a_phase = x[9] b_phase = x[10] t = np.linspace(0, 2 * np.pi, n) error = center_point + a_axis_vector * np.sin(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) return error_sum res = minimize(ellipse_func, p0, args=data) center_point_x, center_point_y, center_point_z, a_axis_vector_x, a_axis_vector_y, a_axis_vector_z, b_axis_vector_x, b_axis_vector_y, b_axis_vector_z, a_phase, b_phase = res.x a_axis_vector = np.array([a_axis_vector_x, a_axis_vector_y, a_axis_vector_z]) b_axis_vector = np.array([b_axis_vector_x, b_axis_vector_y, b_axis_vector_z]) print(str(res.x)) shorter_vector = a_axis_vector if np.all(np.abs(a_axis_vector) > np.abs(b_axis_vector)): shorter_vector = b_axis_vector return shorter_vector (just to leave something working for posterity.) Is this as elegant as it gets? ________________________________ Von: SciPy-User im Auftrag von Robert Kern Gesendet: Montag, 5. Oktober 2020 17:26:18 An: SciPy Users List Betreff: Re: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() You can and should remove the `np.any()` from `ellipse_func()`; that's not what the error message is asking for. That line is expecting `rhok` to be a scalar and thus `isinf(rhok)` to be a scalar boolean. Because of an error in the construction of the parameter vector, `rhok` ended up being an array rather than a scalar, making `isinf(rhok)` a boolean array, which does not have a consistent truth value that can be used in the `if` statement. We make a recommendation in the error message that is usually helpful for the common mistakes that end up like this, at least for people who are writing the `if` statement. However, you are just using the function that has this `if` statement, and the advice does not apply to the code outside of the `if` statement. 
Instead, you need to make the parameter vector `x` (and `p0`, of course) a 1D vector of floats, not an object array mixing arrays and scalars. All of the minimizers are expecting functions that map 1D parameter vectors of floats to scalars. You will have to pack and unpack this parameter vector to get the different vectors and scalars you use to compute the error in `ellipse_func()`. On Mon, Oct 5, 2020 at 10:56 AM Schuldei, Andreas > wrote: I seem to trigger an internal error in scipy.optimize.minimize, somehow. This is the code that triggers the problem: import numpy as np from scipy.optimize import minimize def find_vector_of_minor_axis_from_chunk(data): n = 20 # number of points guess_center_point = data.mean(1) guess_center_point = guess_center_point[np.newaxis, :].transpose() guess_a_phase = 0.0 guess_b_phase = 0.0 guess_a_axis_vector = np.array([[1.0], [0.0], [0.0]]) guess_b_axis_vector = np.array([[0.0], [1.0], [0.0]]) p0 = np.array([guess_center_point, guess_a_axis_vector, guess_a_phase, guess_b_axis_vector, guess_b_phase]) def ellipse_func(x, data): center_point = x[0] a_axis_vector = x[1] a_phase = x[2] b_axis_vector = x[3] b_phase = x[4] t = np.linspace(0, 2 * np.pi, n) error = center_point + a_axis_vector * np.sin(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) return np.any(error_sum) popt, pcov = minimize(ellipse_func, p0, args=data) center_point, a_axis_vector, a_phase, b_axis_vector, b_phase = popt print(str(a_axis_vector + ", " + b_axis_vector)) shorter_vector = a_axis_vector if np.abs(a_axis_vector) > np.aps(b_axis_vector): shorter_vector = b_axis_vector return shorter_vector def main(): data = np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) print(str(find_vector_of_minor_axis_from_chunk(data))) if __name__ == '__main__': main() and this is the traceback for it: "C:\Users\X\PycharmProjects\lissajous-achse\venv\Scripts\python.exe" "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py" Traceback (most recent call last): File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 57, in main() File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 53, in main print(str(find_vector_of_minor_axis_from_chunk(data))) File "C:/Users/X/PycharmProjects/lissajous-achse/ellipse_fit.py", line 29, in find_vector_of_minor_axis_from_chunk popt, pcov = minimize(ellipse_func, p0, args=data) File "C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\_minimize.py", line 604, in minimize return _minimize_bfgs(fun, x0, args, jac, callback, **options) File "C:\Users\X\PycharmProjects\lissajous-achse\venv\lib\site-packages\scipy\optimize\optimize.py", line 1063, in _minimize_bfgs if isinf(rhok): # this is patch for numpy ValueError: The truth value of an array with 
more than one element is ambiguous. Use a.any() or a.all() Process finished with exit code 1 I tried to take the suggestion in the traceback (to use `np.any()` or `np.all()`) on the return value of the `ellipse_func`, but just got the next internal scipy error. What can I do to get my optimization running? I am open to using other functions besides `minimize()`. _______________________________________________ SciPy-User mailing list SciPy-User at python.org https://mail.python.org/mailman/listinfo/scipy-user -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Oct 5 16:10:47 2020 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 5 Oct 2020 16:10:47 -0400 Subject: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() In-Reply-To: References: Message-ID: On Mon, Oct 5, 2020 at 3:23 PM Schuldei, Andreas < andreas.schuldei at th-luebeck.de> wrote: > thank you for your help. Now i rewrote the packing and unpacking and it > looks like this: > > (just to leave something working for posterity.) Is this as elegant as it > gets? > I would probably rearrange `data` to be (n, 3)-shaped so that the 3-vectors can remain (3,)-shaped instead of (3,1)-shaped (also, first axis being the "observation" axis is pretty conventional). Then the packing and unpacking get a little simpler. assert data.shape == (n, 3) center_point = data.mean(axis=0) guess_a_axis_vector = np.array([1.0, 0.0, 0.0]) guess_b_axis_vector = np.array([0.0, 1.0, 0.0]) guess_phases = np.array([0.0, 0.0]) p0 = np.hstack([center_point, guess_a_axis_vector, guess_b_axis_vector, guess_phases]) def ellipse_func(x, data): center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = x[6:9] a_phase, b_phase = x[9:11] t = ... error = center_point + ... - data error_sum = np.sum(error ** 2) return error_sum -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From sciwiseg at gmail.com Tue Oct 6 13:04:15 2020 From: sciwiseg at gmail.com (Edward Montague) Date: Wed, 7 Oct 2020 06:04:15 +1300 Subject: [SciPy-User] _minimize_bfgs throws error: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() In-Reply-To: References: Message-ID: Apologies. Having some difficulty with login, insecure login blocked. On Tue, Oct 6, 2020 at 9:12 AM Robert Kern wrote: > On Mon, Oct 5, 2020 at 3:23 PM Schuldei, Andreas < > andreas.schuldei at th-luebeck.de> wrote: > >> thank you for your help. Now i rewrote the packing and unpacking and it >> looks like this: >> >> (just to leave something working for posterity.) Is this as elegant as it >> gets? >> > I would probably rearrange `data` to be (n, 3)-shaped so that the > 3-vectors can remain (3,)-shaped instead of (3,1)-shaped (also, first axis > being the "observation" axis is pretty conventional). Then the packing and > unpacking get a little simpler. > > assert data.shape == (n, 3) > center_point = data.mean(axis=0) > guess_a_axis_vector = np.array([1.0, 0.0, 0.0]) > guess_b_axis_vector = np.array([0.0, 1.0, 0.0]) > guess_phases = np.array([0.0, 0.0]) > p0 = np.hstack([center_point, guess_a_axis_vector, guess_b_axis_vector, > guess_phases]) > > def ellipse_func(x, data): > center_point = x[0:3] > a_axis_vector = x[3:6] > b_axis_vector = x[6:9] > a_phase, b_phase = x[9:11] > t = ... > error = center_point + ... 
- data > error_sum = np.sum(error ** 2) > return error_sum > > -- > Robert Kern > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schuldei at th-luebeck.de Wed Oct 7 05:28:42 2020 From: andreas.schuldei at th-luebeck.de (Schuldei, Andreas) Date: Wed, 7 Oct 2020 09:28:42 +0000 Subject: [SciPy-User] numeric deformation when using minimize() - looking for good constrains, boundaries or just a clever way to express the problem Message-ID: <25db1d82a2ce4788baeff6cc23ba3e33@th-luebeck.de> import numpy as np from scipy.optimize import minimize def find_ellipse_from_chunk(data): n = data.shape[0] # number of points assert data.shape == (n, 3) guess_center_point = data.mean(axis=0) guess_a_axis_vector = np.array([0.0, 0.0, 1.0]) guess_b_axis_vector = np.array([0.0, 1.0]) guess_phases = np.array([0.0, 0.0]) p0 = np.hstack([guess_center_point, guess_a_axis_vector, guess_b_axis_vector, guess_phases]) def ellipse_func(x, data): center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = np.hstack((x[6], x[7], (x[3] * x[6] + x[4] * x[7]) / (-x[5]))) a_phase, b_phase = x[8:10] t = np.linspace(0, (n / 20) * 2 * np.pi, n)[np.newaxis].T # this must be transposed, so the math adds up error = center_point + a_axis_vector * np.cos(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) + np.sum(error ** 2) * np.dot(a_axis_vector, b_axis_vector) return error_sum res = minimize(ellipse_func, p0, args=data) x = res.x result = np.hstack((x[0:8], (x[3] * x[6] + x[4] * x[7]) / (-x[5]), x[8:10])) return result def main(): data = np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) x = find_ellipse_from_chunk(data.T) center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = x[6:9] a_phase, b_phase = x[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) if __name__ == '__main__': main() This is the algorithm that displays the weirdness. I am trying to fit my data to ellipses. The weired form of the b_axis_vector is a mathematical attempt to make the a_axis_vector and b_axis_vector, which should represent the major and minor axis of the ellipse, stand orthogonal on each other. But apparently this form of expressing the vector has a bad effect on the optimization algorithm, because the resulting vectors for the major and minor axis are rather unbalanced: a: [-4.93203143e-07 -2.14349197e-05 5.00000007e-01] b: [-1.77464077e-04 -4.05250071e-05 -1.91235220e-09] always, the third dimensions of the vectors is much bigger (for vector a) or smaller (for vector b) then the other components and the resulting ellipse ends up as a line. 
What are ways that are more accomodating to numerical optimization for orthogonal vectors? How could constrictions look that would help? I really would appriciate helpful pointers, since this is clearly not my core expertise! This is a version of the code that has a simple 3d plot that is helpful to view the data and the fitted ellipse: import numpy as np from scipy.optimize import minimize from matplotlib.pyplot import cm import matplotlib.pyplot as plt import matplotlib as mpl def find_ellipse_from_chunk(data): n = data.shape[0] # number of points assert data.shape == (n, 3) guess_center_point = data.mean(axis=0) guess_a_axis_vector = np.array([0.0, 0.0, 1.0]) guess_b_axis_vector = np.array([0.0, 1.0]) guess_phases = np.array([0.0, 0.0]) p0 = np.hstack([guess_center_point, guess_a_axis_vector, guess_b_axis_vector, guess_phases]) def ellipse_func(x, data): center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = np.hstack((x[6], x[7], (x[3] * x[6] + x[4] * x[7]) / (-x[5]))) a_phase, b_phase = x[8:10] t = np.linspace(0, (n / 20) * 2 * np.pi, n)[np.newaxis].T # this must be transposed, so the math adds up error = center_point + a_axis_vector * np.cos(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) + np.sum(error ** 2) * np.dot(a_axis_vector, b_axis_vector) return error_sum res = minimize(ellipse_func, p0, args=data) x = res.x result = np.hstack((x[0:8], (x[3] * x[6] + x[4] * x[7]) / (-x[5]), x[8:10])) return result def calculate_ellipses(ellipse, points): calculated_ellipse = np.zeros((points, 3)) center_point = ellipse[0:3] a_axis_vector = ellipse[3:6] b_axis_vector = ellipse[6:9] a_phase, b_phase = ellipse[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) t = np.linspace(0, (points / 20) * 2 * np.pi, points)[np.newaxis].T calculated_ellipse = center_point + a_axis_vector * np.cos( t * a_phase) + b_axis_vector * np.sin(t + b_phase) return calculated_ellipse.transpose() def plot_sensor_in_3d(data, calculated_ellipse): fig_3d = plt.figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k') mpl.rcParams['legend.fontsize'] = 10 ax_3d = fig_3d.gca(projection='3d') title = "3D representation of magnetic field vector over 20ms \nfor " # + self.filestem fig_3d.suptitle(title) plt.xlabel('x-direction of magnetic flux density [10uT]') plt.ylabel('y-direction of magnetic flux density [10uT]') color = iter(cm.rainbow(np.linspace(0, 1, 2))) c = next(color) ax_3d.scatter(data[0, :], data[1, :], data[2, :], s=10, color=c) c = next(color) ax_3d.plot(calculated_ellipse[0, :], calculated_ellipse[1, :], calculated_ellipse[2, :], color=c) plt.show() def main(): data = np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) x = find_ellipse_from_chunk(data.T) print(str(x)) center_point = x[0:3] a_axis_vector = 
x[3:6] b_axis_vector = x[6:9] a_phase, b_phase = x[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) calculated_ellipse = calculate_ellipses(x, data.shape[1]) plot_sensor_in_3d(data, calculated_ellipse) if __name__ == '__main__': main() -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schuldei at th-luebeck.de Thu Oct 8 02:21:43 2020 From: andreas.schuldei at th-luebeck.de (Schuldei, Andreas) Date: Thu, 8 Oct 2020 06:21:43 +0000 Subject: [SciPy-User] numeric deformation when using minimize() - looking for good constrains, boundaries or just a clever way to express the problem In-Reply-To: <25db1d82a2ce4788baeff6cc23ba3e33@th-luebeck.de> References: <25db1d82a2ce4788baeff6cc23ba3e33@th-luebeck.de> Message-ID: <5c546562f7674a95aa45e6f3ac80201b@th-luebeck.de> Not sure if anyone cares, but it was not a fundamental algorithmic problem in need of clever constraints but a typo: error = center_point + a_axis_vector * np.cos(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data here i multiply t with a_phase. A phase difference however is added to the time in a wave function, not multiplied with it. if you change this into error = center_point + a_axis_vector * np.cos(t + a_phase) + b_axis_vector * np.sin(t + b_phase) - data it works as expected. ________________________________ Von: Schuldei, Andreas Gesendet: Mittwoch, 7. Oktober 2020 11:28:42 An: SciPy Users List Betreff: numeric deformation when using minimize() - looking for good constrains, boundaries or just a clever way to express the problem import numpy as np from scipy.optimize import minimize def find_ellipse_from_chunk(data): n = data.shape[0] # number of points assert data.shape == (n, 3) guess_center_point = data.mean(axis=0) guess_a_axis_vector = np.array([0.0, 0.0, 1.0]) guess_b_axis_vector = np.array([0.0, 1.0]) guess_phases = np.array([0.0, 0.0]) p0 = np.hstack([guess_center_point, guess_a_axis_vector, guess_b_axis_vector, guess_phases]) def ellipse_func(x, data): center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = np.hstack((x[6], x[7], (x[3] * x[6] + x[4] * x[7]) / (-x[5]))) a_phase, b_phase = x[8:10] t = np.linspace(0, (n / 20) * 2 * np.pi, n)[np.newaxis].T # this must be transposed, so the math adds up error = center_point + a_axis_vector * np.cos(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) + np.sum(error ** 2) * np.dot(a_axis_vector, b_axis_vector) return error_sum res = minimize(ellipse_func, p0, args=data) x = res.x result = np.hstack((x[0:8], (x[3] * x[6] + x[4] * x[7]) / (-x[5]), x[8:10])) return result def main(): data = np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) x = find_ellipse_from_chunk(data.T) center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = x[6:9] 
a_phase, b_phase = x[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) if __name__ == '__main__': main() This is the algorithm that displays the weirdness. I am trying to fit my data to ellipses. The weired form of the b_axis_vector is a mathematical attempt to make the a_axis_vector and b_axis_vector, which should represent the major and minor axis of the ellipse, stand orthogonal on each other. But apparently this form of expressing the vector has a bad effect on the optimization algorithm, because the resulting vectors for the major and minor axis are rather unbalanced: a: [-4.93203143e-07 -2.14349197e-05 5.00000007e-01] b: [-1.77464077e-04 -4.05250071e-05 -1.91235220e-09] always, the third dimensions of the vectors is much bigger (for vector a) or smaller (for vector b) then the other components and the resulting ellipse ends up as a line. What are ways that are more accomodating to numerical optimization for orthogonal vectors? How could constrictions look that would help? I really would appriciate helpful pointers, since this is clearly not my core expertise! This is a version of the code that has a simple 3d plot that is helpful to view the data and the fitted ellipse: import numpy as np from scipy.optimize import minimize from matplotlib.pyplot import cm import matplotlib.pyplot as plt import matplotlib as mpl def find_ellipse_from_chunk(data): n = data.shape[0] # number of points assert data.shape == (n, 3) guess_center_point = data.mean(axis=0) guess_a_axis_vector = np.array([0.0, 0.0, 1.0]) guess_b_axis_vector = np.array([0.0, 1.0]) guess_phases = np.array([0.0, 0.0]) p0 = np.hstack([guess_center_point, guess_a_axis_vector, guess_b_axis_vector, guess_phases]) def ellipse_func(x, data): center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = np.hstack((x[6], x[7], (x[3] * x[6] + x[4] * x[7]) / (-x[5]))) a_phase, b_phase = x[8:10] t = np.linspace(0, (n / 20) * 2 * np.pi, n)[np.newaxis].T # this must be transposed, so the math adds up error = center_point + a_axis_vector * np.cos(t * a_phase) + b_axis_vector * np.sin(t + b_phase) - data error_sum = np.sum(error ** 2) + np.sum(error ** 2) * np.dot(a_axis_vector, b_axis_vector) return error_sum res = minimize(ellipse_func, p0, args=data) x = res.x result = np.hstack((x[0:8], (x[3] * x[6] + x[4] * x[7]) / (-x[5]), x[8:10])) return result def calculate_ellipses(ellipse, points): calculated_ellipse = np.zeros((points, 3)) center_point = ellipse[0:3] a_axis_vector = ellipse[3:6] b_axis_vector = ellipse[6:9] a_phase, b_phase = ellipse[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) t = np.linspace(0, (points / 20) * 2 * np.pi, points)[np.newaxis].T calculated_ellipse = center_point + a_axis_vector * np.cos( t * a_phase) + b_axis_vector * np.sin(t + b_phase) return calculated_ellipse.transpose() def plot_sensor_in_3d(data, calculated_ellipse): fig_3d = plt.figure(num=None, figsize=(10, 10), dpi=80, facecolor='w', edgecolor='k') mpl.rcParams['legend.fontsize'] = 10 ax_3d = fig_3d.gca(projection='3d') title = "3D representation of magnetic field vector over 20ms \nfor " # + self.filestem fig_3d.suptitle(title) plt.xlabel('x-direction of magnetic flux density [10uT]') plt.ylabel('y-direction of magnetic flux density [10uT]') color = iter(cm.rainbow(np.linspace(0, 1, 2))) c = next(color) ax_3d.scatter(data[0, :], data[1, :], data[2, :], s=10, color=c) c = next(color) ax_3d.plot(calculated_ellipse[0, :], calculated_ellipse[1, :], calculated_ellipse[2, :], color=c) plt.show() def main(): data = 
np.array([[-4.62767933, -4.6275775, -4.62735346, -4.62719652, -4.62711625, -4.62717975, -4.62723845, -4.62722407, -4.62713901, -4.62708749, -4.62703238, -4.62689101, -4.62687185, -4.62694013, -4.62701082, -4.62700483, -4.62697488, -4.62686825, -4.62675683, -4.62675204], [-1.58625998, -1.58625039, -1.58619648, -1.58617611, -1.58620606, -1.5861833, -1.5861821, -1.58619169, -1.58615814, -1.58616893, -1.58613179, -1.58615934, -1.58611262, -1.58610782, -1.58613179, -1.58614017, -1.58613059, -1.58612699, -1.58607428, -1.58610183], [-0.96714786, -0.96713827, -0.96715984, -0.96715145, -0.96716703, -0.96712869, -0.96716104, -0.96713228, -0.96719698, -0.9671838, -0.96717062, -0.96717062, -0.96715744, -0.96707717, -0.96709275, -0.96706519, -0.96715026, -0.96711791, -0.96713588, -0.96714786]]) x = find_ellipse_from_chunk(data.T) print(str(x)) center_point = x[0:3] a_axis_vector = x[3:6] b_axis_vector = x[6:9] a_phase, b_phase = x[9:11] print("a: " + str(a_axis_vector) + " b: " + str(b_axis_vector)) calculated_ellipse = calculate_ellipses(x, data.shape[1]) plot_sensor_in_3d(data, calculated_ellipse) if __name__ == '__main__': main() -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongyi.zhao at gmail.com Tue Oct 27 09:19:30 2020 From: hongyi.zhao at gmail.com (Hongyi Zhao) Date: Tue, 27 Oct 2020 21:19:30 +0800 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. Message-ID: Hi, I try to convert between Euler angle and Quaternion representations. But I obtained different results when using scipy and the online converter (https://quaternions.online/). See following for detailed info: The results obtained from scipy: In [14]: from scipy.spatial.transform import Rotation as R In [15]: r = R.from_euler('xyz', [90, 45, 30], degrees=True) In [17]: r.as_quat() Out[17]: array([ 0.56098553, 0.43045933, -0.09229596, 0.70105738]) The results given by the online converter (https://quaternions.online/): w: 0.561 x: 0.701 y: 0.092 z: 0.430 Any hints for this problem will be highly appreciated? Regards, HY -- Assoc. Prof. Hongyi Zhao Theory and Simulation of Materials, Xingtai Polytechnic College NO. 552 North Gangtie Road, Xingtai, China From erik.tollerud at gmail.com Tue Oct 27 14:18:16 2020 From: erik.tollerud at gmail.com (Erik Tollerud) Date: Tue, 27 Oct 2020 14:18:16 -0400 Subject: [SciPy-User] ANN: Astropy v4.1 released Message-ID: Dear colleagues, We are very happy to announce the v4.1 release of the Astropy package, a core Python package for Astronomy: http://www.astropy.org Astropy is a community-driven Python package intended to contain much of the core functionality and common tools needed for astronomy and astrophysics. It is part of the Astropy Project, which aims to foster an ecosystem of interoperable astronomy packages for Python. 
New and improved major functionality in this release includes: * A new SpectralCoord class for representing and transforming spectral quantities * Support for writing Dask arrays to FITS files * Added True Equator Mean Equinox (TEME) frame for satellite two-line ephemeris data * Support for in-place setting of array-valued SkyCoord and frame objects * Change in the definition of equality comparison for coordinate classes * Support use of SkyCoord in table vstack, dstack, and insert_row * Support for table cross-match join with SkyCoord or N-d columns * Support for custom attributes in Table subclasses * Added a new Time subformat unix_tai * Added support for the -TAB convention in FITS WCS * Support for replacing submodels in CompoundModel * Support for units on otherwise unitless models via the Model.coerce_units method. * Support for ASDF serialization of models In addition, hundreds of smaller improvements and fixes have been made. An overview of the changes is provided at: http://docs.astropy.org/en/stable/whatsnew/4.1.html Instructions for installing Astropy are provided on our website, and extensive documentation can be found at: http://docs.astropy.org If you usually use pip/vanilla Python, you can do: pip install astropy --upgrade If you make use of the Anaconda Python Distribution, soon you will be able update to Astropy v4.1 with: conda update astropy Or if you cannot wait for Anaconda to update their default version, you can use the astropy channel: conda update -c astropy astropy Please report any issues, or request new features via our GitHub repository: https://github.com/astropy/astropy/issues Nearly 400 developers have contributed code to Astropy so far, and you can find out more about the team behind Astropy here: https://www.astropy.org/team.html The LTS (Long Term Support) version of Astropy at the time of v4.1's release is v4.0 - this version will be maintained until next LTS release (v5.0, scheduled for Fall 2021). Additionally, note that the Astropy 4.x series only supports Python 3. Python 2 users can continue to use the 2.x series but it is no longer supported (as Python 2 itself is no longer supported). For assistance converting Python 2 code to Python 3, see the Python 3 for scientists conversion guide. If you use Astropy directly for your work, or as a dependency to another package, please remember to acknowledge it by citing the appropriate Astropy paper. For the most up-to-date suggestions, see the acknowledgement page, but as of this release the recommendation is: This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2018). We hope that you enjoy using Astropy as much as we enjoyed developing it! Erik Tollerud v4.1 Release Coordinator on behalf of The Astropy Project https://www.astropy.org/announcements/release-4.1.html From guillaume at damcb.com Wed Oct 28 04:27:47 2020 From: guillaume at damcb.com (Guillaume Gay) Date: Wed, 28 Oct 2020 09:27:47 +0100 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. In-Reply-To: References: Message-ID: Hi, Hongyi, This seems to have to do first with extrinsic vs intrinsic order, (the website seems to assume intrinsic angles). Thus with "XYZ" as first argument, the result are still different but that difference is consistent (which is not the case for e.g. 
`r = R.from_euler('xyz', [30, 45, 30], degrees=True)` which gives different values all together on your website / with scipy. r = R.from_euler('XYZ', [90, 45, 30], degrees=True) r.as_quat() array([0.70105738, 0.09229596, 0.43045933, 0.56098553]) the result is now a permutation away from what the website says. I'd test with 'obvious' cases to get to the bottom of that, but I mostly think one should trust scipy to do "the right thing" as it has had more scrutiny than the website. Hope this helps Le 27/10/2020 ? 14:19, Hongyi Zhao a ?crit?: > Hi, > > I try to convert between Euler angle and Quaternion representations. > But I obtained different results when using scipy and the online > converter (https://quaternions.online/). See following for detailed > info: > > The results obtained from scipy: > > In [14]: from scipy.spatial.transform import Rotation as R > > In [15]: r = R.from_euler('xyz', [90, 45, 30], degrees=True) > In [17]: r.as_quat() > Out[17]: array([ 0.56098553, 0.43045933, -0.09229596, 0.70105738]) > > The results given by the online converter (https://quaternions.online/): > > w: 0.561 > x: 0.701 > y: 0.092 > z: 0.430 > > Any hints for this problem will be highly appreciated? > > Regards, > HY From hongyi.zhao at gmail.com Wed Oct 28 04:55:18 2020 From: hongyi.zhao at gmail.com (Hongyi Zhao) Date: Wed, 28 Oct 2020 16:55:18 +0800 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. In-Reply-To: References: Message-ID: On Wed, Oct 28, 2020 at 4:29 PM Guillaume Gay wrote: > > Hi, Hongyi, > > > This seems to have to do first with extrinsic vs intrinsic order, (the > website seems to assume intrinsic angles). Where are the origins of the coordinates and the positions of the three coordinate axes for extrinsic and intrinsic order, respectively? > Thus with "XYZ" as first > argument, the result are still different but that difference is > consistent (which is not the case for e.g. `r = R.from_euler('xyz', [30, > 45, 30], degrees=True)` which gives different values all together on > your website / with scipy. > > > r = R.from_euler('XYZ', [90, 45, 30], degrees=True) > > r.as_quat() > > array([0.70105738, 0.09229596, 0.43045933, 0.56098553]) > > the result is now a permutation away from what the website says. > > > I'd test with 'obvious' cases to get to the bottom of that, but I mostly > think one should trust scipy to do "the right thing" as it has had more > scrutiny than the website. Thanks a lot for your suggestions and experiences. Regards, HY > > > Hope this helps > > > Le 27/10/2020 ? 14:19, Hongyi Zhao a ?crit : > > Hi, > > > > I try to convert between Euler angle and Quaternion representations. > > But I obtained different results when using scipy and the online > > converter (https://quaternions.online/). See following for detailed > > info: > > > > The results obtained from scipy: > > > > In [14]: from scipy.spatial.transform import Rotation as R > > > > In [15]: r = R.from_euler('xyz', [90, 45, 30], degrees=True) > > In [17]: r.as_quat() > > Out[17]: array([ 0.56098553, 0.43045933, -0.09229596, 0.70105738]) > > > > The results given by the online converter (https://quaternions.online/): > > > > w: 0.561 > > x: 0.701 > > y: 0.092 > > z: 0.430 > > > > Any hints for this problem will be highly appreciated? 
> > > > Regards, > > HY > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user -- Assoc. Prof. Hongyi Zhao Theory and Simulation of Materials, Xingtai Polytechnic College NO. 552 North Gangtie Road, Xingtai, China From hongyi.zhao at gmail.com Wed Oct 28 05:49:30 2020 From: hongyi.zhao at gmail.com (Hongyi Zhao) Date: Wed, 28 Oct 2020 17:49:30 +0800 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. In-Reply-To: References: Message-ID: On Wed, Oct 28, 2020 at 4:29 PM Guillaume Gay wrote: > > Hi, Hongyi, > > > This seems to have to do first with extrinsic vs intrinsic order, (the > website seems to assume intrinsic angles). Thus with "XYZ" as first > argument, the result are still different but that difference is > consistent (which is not the case for e.g. `r = R.from_euler('xyz', [30, > 45, 30], degrees=True)` which gives different values all together on > your website / with scipy. > > > r = R.from_euler('XYZ', [90, 45, 30], degrees=True) > > r.as_quat() > > array([0.70105738, 0.09229596, 0.43045933, 0.56098553]) I also find another website located at for doing the similar job. The results given by this website is as follows for the Euler angles in degrees [90, 45, 30]: 0.560985526796931 0.43045933457687935 -0.560985526796931 0.43045933457687935 As you can see, this result is different from the ones obtained by both extrinsic and intrinsic order of scipy shown as following: extrinsic: [0.7010573846499779, 0.5609855267969309, 0.4304593345768794, -0.09229595564125723] intrinsic: [0.560985526796931, 0.7010573846499778, 0.09229595564125731, 0.43045933457687935] The codes used by me is as following: from scipy.spatial.transform import Rotation as R from squaternion import Quaternion euler_angle = [90, 45, 30] q = Quaternion.from_euler(euler_angle[0], euler_angle[1], euler_angle[2], degrees=True) extrinsic_r = R.from_euler('xyz', euler_angle[:], degrees=True) intrinsic_r = R.from_euler('XYZ', euler_angle[:], degrees=True) extrinsic_quat = extrinsic_r.as_quat() intrinsic_quat = intrinsic_r.as_quat() sq_quat = [] sq_quat.extend([q.w, q.x, q.y, q.z]) ex_quat = [extrinsic_quat[3]] ex_quat.extend(extrinsic_quat[:3]) in_quat = [intrinsic_quat[3]] in_quat.extend(intrinsic_quat[:3]) print(sq_quat) print(ex_quat) print(in_quat) Regards, HY > > the result is now a permutation away from what the website says. > > > I'd test with 'obvious' cases to get to the bottom of that, but I mostly > think one should trust scipy to do "the right thing" as it has had more > scrutiny than the website. > > > Hope this helps > > > Le 27/10/2020 ? 14:19, Hongyi Zhao a ?crit : > > Hi, > > > > I try to convert between Euler angle and Quaternion representations. > > But I obtained different results when using scipy and the online > > converter (https://quaternions.online/). See following for detailed > > info: > > > > The results obtained from scipy: > > > > In [14]: from scipy.spatial.transform import Rotation as R > > > > In [15]: r = R.from_euler('xyz', [90, 45, 30], degrees=True) > > In [17]: r.as_quat() > > Out[17]: array([ 0.56098553, 0.43045933, -0.09229596, 0.70105738]) > > > > The results given by the online converter (https://quaternions.online/): > > > > w: 0.561 > > x: 0.701 > > y: 0.092 > > z: 0.430 > > > > Any hints for this problem will be highly appreciated? 
> > > > Regards, > > HY > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user -- Assoc. Prof. Hongyi Zhao Theory and Simulation of Materials, Xingtai Polytechnic College NO. 552 North Gangtie Road, Xingtai, China From charlesr.harris at gmail.com Wed Oct 28 22:12:30 2020 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 28 Oct 2020 20:12:30 -0600 Subject: [SciPy-User] NumPy 1.19.3 release Message-ID: Hi All, On behalf of the NumPy team I am pleased to announce that NumPy 1.19.3 has been released. NumPy 1.19.3 is a small maintenance release with two major improvements: - Python 3.9 binary wheels on all supported platforms, - OpenBLAS fixes for Windows 10 version 2004 fmod bug. This release supports Python 3.6-3.9 and is linked with OpenBLAS 3.12 to avoid some of the fmod problems on Windows 10 version 2004. Microsoft is aware of the problem and users should upgrade when the fix becomes available, the fix here is limited in scope. NumPy Wheels for this release can be downloaded from the PyPI , source archives, release notes, and wheel hashes are available on Github . Linux users will need pip >= 0.19.3 in order to install manylinux2010 and manylinux2014 wheels. *Contributors* A total of 8 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Charles Harris - Chris Brown + - Daniel Vanzo + - E. Madison Bray + - Hugo van Kemenade + - Ralf Gommers - Sebastian Berg - @danbeibei + *Pull requests merged* A total of 10 pull requests were merged for this release. - #17298: BLD: set upper versions for build dependencies - #17336: BUG: Set deprecated fields to null in PyArray_InitArrFuncs - #17446: ENH: Warn on unsupported Python 3.10+ - #17450: MAINT: Update test_requirements.txt. - #17522: ENH: Support for the NVIDIA HPC SDK nvfortran compiler - #17568: BUG: Cygwin Workaround for #14787 on affected platforms - #17647: BUG: Fix memory leak of buffer-info cache due to relaxed strides - #17652: MAINT: Backport openblas_support from master. - #17653: TST: Add Python 3.9 to the CI testing on Windows, Mac. - #17660: TST: Simplify source path names in test_extending. Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Wed Oct 28 23:34:07 2020 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Wed, 28 Oct 2020 23:34:07 -0400 Subject: [SciPy-User] [SciPy-Dev] NumPy 1.19.3 release In-Reply-To: References: Message-ID: On 10/28/20, Charles R Harris wrote: > Hi All, > > On behalf of the NumPy team I am pleased to announce that NumPy 1.19.3 has > been released. NumPy 1.19.3 is a small maintenance release with two major > improvements: > > - Python 3.9 binary wheels on all supported platforms, > - OpenBLAS fixes for Windows 10 version 2004 fmod bug. > > This release supports Python 3.6-3.9 and is linked with OpenBLAS 3.12 to > avoid some of the fmod problems on Windows 10 version 2004. Microsoft is > aware of the problem and users should upgrade when the fix becomes > available, the fix here is limited in scope. > > NumPy Wheels for this release can be downloaded from the PyPI > , source archives, release notes, > and wheel hashes are available on Github > . Linux users will > need pip >= 0.19.3 in order to install manylinux2010 and manylinux2014 > wheels. > > *Contributors* > > A total of 8 people contributed to this release. 
People with a "+" by > their > names contributed a patch for the first time. > > > - Charles Harris > - Chris Brown + > - Daniel Vanzo + > - E. Madison Bray + > - Hugo van Kemenade + > - Ralf Gommers > - Sebastian Berg > - @danbeibei + > > > > *Pull requests merged* > A total of 10 pull requests were merged for this release. > > - #17298: BLD: set upper versions for build dependencies > - #17336: BUG: Set deprecated fields to null in PyArray_InitArrFuncs > - #17446: ENH: Warn on unsupported Python 3.10+ > - #17450: MAINT: Update test_requirements.txt. > - #17522: ENH: Support for the NVIDIA HPC SDK nvfortran compiler > - #17568: BUG: Cygwin Workaround for #14787 on affected platforms > - #17647: BUG: Fix memory leak of buffer-info cache due to relaxed > strides > - #17652: MAINT: Backport openblas_support from master. > - #17653: TST: Add Python 3.9 to the CI testing on Windows, Mac. > - #17660: TST: Simplify source path names in test_extending. > > Cheers, > > Charles Harris > Thanks for managing the release, Chuck! Warren From Jerome.Kieffer at esrf.fr Thu Oct 29 03:19:38 2020 From: Jerome.Kieffer at esrf.fr (Jerome Kieffer) Date: Thu, 29 Oct 2020 08:19:38 +0100 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. In-Reply-To: References: Message-ID: <20201029081938.38640997@lintaillefer.esrf.fr> On Wed, 28 Oct 2020 09:27:47 +0100 Guillaume Gay wrote: > I'd test with 'obvious' cases to get to the bottom of that, but I mostly > think one should trust scipy to do "the right thing" as it has had more > scrutiny than the website. One of the reference piece of code I know is the one from C. Gohlke: https://github.com/malcolmreynolds/transformations It support the full zoology of euler angles (there are 24 different conventions) Maybe it will help you to sort out who is doing what. -- J?r?me Kieffer From hongyi.zhao at gmail.com Thu Oct 29 06:17:53 2020 From: hongyi.zhao at gmail.com (Hongyi Zhao) Date: Thu, 29 Oct 2020 18:17:53 +0800 Subject: [SciPy-User] The different results given by scipy and https://quaternions.online/ when converting Euler angles to Quaternion representation. In-Reply-To: <20201029081938.38640997@lintaillefer.esrf.fr> References: <20201029081938.38640997@lintaillefer.esrf.fr> Message-ID: On Thu, Oct 29, 2020 at 3:22 PM Jerome Kieffer wrote: > > On Wed, 28 Oct 2020 09:27:47 +0100 > Guillaume Gay wrote: > > > I'd test with 'obvious' cases to get to the bottom of that, but I mostly > > think one should trust scipy to do "the right thing" as it has had more > > scrutiny than the website. > > One of the reference piece of code I know is the one from C. Gohlke: > https://github.com/malcolmreynolds/transformations Thank you for the valuable tip. In fact, I've checked the above package and find that it's just a very outdated clone - the recent commit was done on Jul 23, 2014 - of the original package by C. Gohlke located here: https://pypi.org/project/transformations/ And also see the author's website for more information: https://www.lfd.uci.edu/~gohlke/ I'll do some further inspections based this package and communicate with the author if necessary. Thanks again for your helpful notes. Regards, HY > It support the full zoology of euler angles (there are 24 different conventions) > > Maybe it will help you to sort out who is doing what. 
> -- > Jérôme Kieffer > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user -- Assoc. Prof. Hongyi Zhao Theory and Simulation of Materials, Xingtai Polytechnic College NO. 552 North Gangtie Road, Xingtai, China From gp459 at cam.ac.uk Thu Oct 29 14:45:46 2020 From: gp459 at cam.ac.uk (Giovanni Pugliese Carratelli) Date: Thu, 29 Oct 2020 18:45:46 +0000 Subject: [SciPy-User] Help understanding multivariate rv_discrete in the multivariate case Message-ID: Hello, I have just been through the documentation for scipy's rv_discrete function. In particular I am interested in creating a discrete-support multivariate rv from values I provide. From the documentation it appears that this is possible only for a list of values. So I have come up with the code:

import numpy as np
from scipy import stats

a = [[1/6, 1/6, 1/6], [1/6, 1/6, 1/6]]
# print(np.shape(a))
xks = [[1, 2, 3], [1, 2, 3]]
joint = stats.rv_discrete(name='Joint', values=(xks, a))

Is this correct? Moreover, how can I use "expect" now that the rv is multidimensional? I would need, for instance, to compute the expectation along only one of the dimensions, or simply to marginalise. I have tried Stack Exchange to no avail. -------------- next part -------------- An HTML attachment was scrubbed... URL:
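scipy.stats.rv_discrete is documented for one-dimensional supports (a single xk array with matching pk), so it is not clear that the call above gives a joint distribution over pairs. A common workaround is to keep the joint PMF as a plain NumPy array and do the marginalisation and per-dimension expectations with axis sums. The sketch below is only illustrative; the uniform 3x3 PMF and the variable names are assumptions, not taken from the question:

import numpy as np

# Joint PMF on a 3x3 grid: p[i, j] = P(X = x_vals[i], Y = y_vals[j]).
x_vals = np.array([1, 2, 3])
y_vals = np.array([1, 2, 3])
p = np.full((3, 3), 1.0 / 9.0)   # assumed uniform joint PMF; must sum to 1
assert np.isclose(p.sum(), 1.0)

# Marginal PMFs: sum the joint PMF over the other axis.
p_x = p.sum(axis=1)              # P(X = x_vals[i])
p_y = p.sum(axis=0)              # P(Y = y_vals[j])

# Expectation along a single dimension uses the corresponding marginal.
e_x = np.sum(x_vals * p_x)
e_y = np.sum(y_vals * p_y)

# Expectation of a function of both variables, e.g. E[X * Y].
e_xy = np.sum(np.outer(x_vals, y_vals) * p)

print(e_x, e_y, e_xy)

If the rv_discrete interface (for example its expect method) is still wanted, the 1-D marginals p_x and p_y can be passed to stats.rv_discrete one dimension at a time.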