Inconsistent results with localtimestamp and current_timestamp
Running XE on Windows XP with the system timezone set to GMT (rebooted, restarted XE):
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
I'm getting incorrect and inconsistent results from current_timestamp and localtimestamp.
In SQL, localtimestamp computes the wrong offset (it appears to use the 1987-2006 DST rules):
select
dbtimezone
, sessiontimezone
, current_timestamp
, current_timestamp + numtodsinterval(18,'day') as current_timestamp18
, localtimestamp
from dual;
+00:00
US/Eastern
17-MAR-10 10.27.17.376000000 AM US/EASTERN
04-APR-10 10.27.17.376000000 AM US/EASTERN
17-MAR-10 09.27.17.376000000 AM
However, in PL/SQL both current_timestamp and localtimestamp return the wrong hour value, and adding 18 days to current_timestamp shows it is using the 1987-2006 DST rules (first Sunday of April). Note that this happens both in straight PL/SQL and in embedded SQL (same results when selecting from tables other than DUAL):
begin
for r1 in (
select
dbtimezone
, sessiontimezone
, current_timestamp
, current_timestamp + numtodsinterval(18,'day') as current_timestamp18
, localtimestamp
from dual
) loop
dbms_output.put_line('SQL dbtimezone = ' || r1.dbtimezone);
dbms_output.put_line('SQL sessiontimezone = ' || r1.sessiontimezone);
dbms_output.put_line('SQL current_timestamp = ' || r1.current_timestamp);
dbms_output.put_line('SQL current_timestamp +18 = ' || r1.current_timestamp18);
dbms_output.put_line('SQL localtimestamp = ' || r1.localtimestamp);
end loop;
dbms_output.put_line('dbtimezone = ' || dbtimezone);
dbms_output.put_line('sessiontimezone = ' || sessiontimezone);
dbms_output.put_line('systimestamp = ' || systimestamp);
dbms_output.put_line('current_timestamp = ' || current_timestamp);
dbms_output.put_line('current_timestamp +18 = ' || (current_timestamp + numtodsinterval(18,'day')));
dbms_output.put_line('localtimestamp = ' || localtimestamp);
end;
/
SQL dbtimezone = +00:00
SQL sessiontimezone = US/Eastern
SQL current_timestamp = 17-MAR-10 09.29.32.784000 AM US/EASTERN
SQL current_timestamp +18 = 04-APR-10 10.29.32.784000000 AM US/EASTERN
SQL localtimestamp = 17-MAR-10 09.29.32.784000 AM
dbtimezone = +00:00
sessiontimezone = US/Eastern
systimestamp = 17-MAR-10 02.29.32.784000000 PM +00:00
current_timestamp = 17-MAR-10 09.29.32.784000000 AM US/EASTERN
current_timestamp +18 = 04-APR-10 10.29.32.784000000 AM US/EASTERN
localtimestamp = 17-MAR-10 09.29.32.784000000 AM
dbtimezone = +00:00
sessiontimezone = US/Eastern
systimestamp = 17-MAR-10 02.16.21.366000000 PM +00:00
current_timestamp = 17-MAR-10 09.16.21.366000000 AM US/EASTERN
current_timestamp +18 = 04-APR-10 10.16.21.366000000 AM US/EASTERN
localtimestamp = 17-MAR-10 09.16.21.366000000 AM
Is this a known bug?
Is there a patch or a work-around for XE?
Are other database versions affected?
Can't patch XE; unfortunately it comes with pre-2007 DST rules.
There is a Metalink note describing how to fix the DST changes, and while it's not really a "supported" method, neither is XE. If you can get updated timezone files from a later patch set for the same release (10gR2) on the same operating system, then after a shutdown/startup of the database the updated DST rules will be in place. The timezone files are in $ORACLE_HOME/oracore/zoneinfo.
Another unfortunate consequence: any values already stored in the database using TIMESTAMP WITH LOCAL TIME ZONE datatypes within the period affected by the DST changes won't be correct. For example, there is no 2010-03-14 02:01 under the current rules, but with the older timezone rules in place that would be a valid timestamp. Such data has to be saved off before updating the timezone file and re-translated into TIMESTAMP WITH LOCAL TIME ZONE values after the update.
IMHO storing literal timezone info isn't an ideal practice; let the client settings do the time interpretation. Time itself isn't what changes; it's the interpretation of the time that gets changed, from time to time. :(
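The two rule sets can be checked outside Oracle. This is a hedged stand-alone sketch in plain Python (not Oracle code): under the pre-2007 US rules DST began on the first Sunday of April, while under the 2007+ rules it begins on the second Sunday of March, so 2010-03-17 is in DST only under the new rules.

```python
from datetime import date, timedelta

def nth_sunday(year: int, month: int, n: int) -> date:
    """Return the n-th Sunday of the given month."""
    d = date(year, month, 1)
    # weekday(): Monday=0 ... Sunday=6; advance to the first Sunday
    d += timedelta(days=(6 - d.weekday()) % 7)
    return d + timedelta(weeks=n - 1)

# US DST start under the old (1987-2006) and new (2007+) rules
old_rule_start = nth_sunday(2010, 4, 1)   # first Sunday of April 2010
new_rule_start = nth_sunday(2010, 3, 2)   # second Sunday of March 2010

print(old_rule_start)  # 2010-04-04
print(new_rule_start)  # 2010-03-14

probe = date(2010, 3, 17)  # the date in the forum output above
# Under the new rules DST has already started; under the old rules it hasn't,
# which accounts for the one-hour discrepancy in localtimestamp.
print(new_rule_start <= probe < old_rule_start)  # True
```

This matches the output above: an old-rules timezone file only switches US/Eastern to EDT on 04-APR, so on 17-MAR the local time is computed an hour off.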
Similar Messages
-
Inconsistent results with SDO_RELATE and boundary conditions
Hello,
I am using SDO_RELATE to find all points in one table with any interaction with a polygon selected from a second table. Pretty basic stuff. I noticed one point which exactly matches a vertex on the query polygon was not getting selected as expected. Experimenting a bit I found if I embedded the polygon geometry in the query (rather than selecting it from its table) the query selected the point in question! Experimenting further I found if I changed the query relation from ANYINTERACT to TOUCH the point in question was not selected. So my 2 questions are:
1) What would cause this to fail when the query polygon is being selected from the table?
2) How can ANYINTERACT be true but TOUCH be false?
Here is the first query that fails:
SELECT a.point_id
FROM point_table a, poly_table b
WHERE a.point_id = <point which matches poly vertex>
AND b.poly_id = <poly_id>
AND SDO_RELATE (a.geom, b.geom, 'mask=ANYINTERACT querytype=WINDOW') = 'TRUE';
Here is the query that works:
SELECT a.point_id
FROM point_table a, poly_table b
WHERE a.point_id = <point which matches poly vertex>
AND SDO_RELATE (a.geom,
MDSYS.SDO_GEOMETRY(2003, 8265, NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1),
MDSYS.SDO_ORDINATE_ARRAY(-82.414884, 28.0094323,
-82.387158, 28.0116258, -82.378891, 28.0131216,
-82.377988, 28.0133894, -82.37555, 28.0143994,
-82.329352, 28.0661089, -82.313207, 28.1006725,
-82.362246, 28.1261981, -82.445319, 28.1139363,
-82.428389, 28.0245891, -82.422103, 28.0117697,
-82.421382, 28.0109085, -82.419096, 28.0099741,
-82.414884, 28.0094323)),
'mask=ANYINTERACT querytype=WINDOW') = 'TRUE';
Here is the second query that fails (ANYINTERACT -> TOUCH):
SELECT a.point_id
FROM point_table a, poly_table b
WHERE a.point_id = <point which matches poly vertex>
AND SDO_RELATE (a.geom,
MDSYS.SDO_GEOMETRY(2003, 8265, NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1),
MDSYS.SDO_ORDINATE_ARRAY(-82.414884, 28.0094323,
-82.387158, 28.0116258, -82.378891, 28.0131216,
-82.377988, 28.0133894, -82.37555, 28.0143994,
-82.329352, 28.0661089, -82.313207, 28.1006725,
-82.362246, 28.1261981, -82.445319, 28.1139363,
-82.428389, 28.0245891, -82.422103, 28.0117697,
-82.421382, 28.0109085, -82.419096, 28.0099741,
-82.414884, 28.0094323)),
'mask=TOUCH querytype=WINDOW') = 'TRUE';
The point geometry being selected from the point_table looks like this:
MDSYS.SDO_GEOMETRY(2001, 8265, NULL,
MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),
MDSYS.SDO_ORDINATE_ARRAY(-82.445319, 28.1139363))
The metadata for these 2 tables looks like this:
POINT_TABLE
GEOM
SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05),
SDO_DIM_ELEMENT('Y', -90, 90, .05))
8265
POLY_TABLE
GEOM
SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05),
SDO_DIM_ELEMENT('Y', -90, 90, .05))
8265
Both tables have R-tree indexes built.
System is Oracle9i Enterprise Edition Release 9.2.0.5.0 with sdo_version = 9.2.0.5.0
Is it a problem that my points are stored as SDO_ORDINATES? Is the tolerance a factor? Is geodetic data a factor?
Let me know if any other information would be useful and thank you for your help!
James

I am continuing to struggle with this one. I gave up on the SDO_RELATE function under the assumption that the tolerance does not come into play for this function (is this true?).
Now I am trying to use the SDO_GEOM.RELATE function with a tolerance to make this work and it is not working as I would expect.
The following two queries show the distance between this point and this polygon as 0, yet RELATE calls them DISJOINT. How can this be?
Thanks, James
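As a sanity check on the numbers involved, here is a hedged stand-alone sketch in plain Python (not Oracle Spatial). It approximates the ground distance between the query point and the nearly identical polygon vertex used in the two queries below; the gap is a few centimeters, i.e. well under the 10 m tolerance passed to SDO_DISTANCE but still nonzero, which is exactly the regime where tolerance-based and exact topological answers can diverge.

```python
import math

def approx_ground_distance_m(lon1, lat1, lon2, lat2):
    """Small-separation equirectangular approximation, in meters."""
    m_per_deg = 111_320.0  # rough meters per degree of latitude
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = (lon2 - lon1) * m_per_deg * math.cos(mean_lat)
    dy = (lat2 - lat1) * m_per_deg
    return math.hypot(dx, dy)

point = (-82.445319, 28.1139363)                 # query point geometry
vertex = (-82.4453185022412, 28.1139363297581)   # polygon vertex in the queries

d = approx_ground_distance_m(*point, *vertex)
print(f"{d:.4f} m")  # roughly 0.05 m: below the tolerance, but not zero
```

Because the separation is below the tolerance, SDO_DISTANCE legitimately reports 0, while an exact relate can still classify the pair as DISJOINT.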
SQL> SELECT SDO_GEOM.SDO_DISTANCE(
2 MDSYS.SDO_GEOMETRY(2001, 8265, NULL,
3 MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),
4 MDSYS.SDO_ORDINATE_ARRAY(-82.445319, 28.1139363)),
5 MDSYS.SDO_GEOMETRY(2003, 8265, NULL,
6 MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1),
7 MDSYS.SDO_ORDINATE_ARRAY(-82.414884, 28.0094323, -82.387158, 28.0116258,
8 -82.378891, 28.0131216, -82.377988, 28.0133894, -82.37555, 28.0143994,
9 -82.329352, 28.0661089, -82.313207, 28.1006725, -82.362246, 28.1261981,
10 -82.4453185022412,28.1139363297581, -82.428389, 28.0245891, -82.422103, 28.0117697,
11 -82.421382, 28.0109085, -82.419096, 28.0099741, -82.414884, 28.0094323)),
12 10)
13 FROM DUAL;
SDO_GEOM.SDO_DISTANCE(MDSYS.SDO_GEOMETRY(2001,8265,NULL,MDSYS.SDO_ELEM_INFO_ARRA
.00000000000000000000
SQL>
SQL> SELECT SDO_GEOM.RELATE(
2 MDSYS.SDO_GEOMETRY(2001, 8265, NULL,
3 MDSYS.SDO_ELEM_INFO_ARRAY(1, 1, 1),
4 MDSYS.SDO_ORDINATE_ARRAY(-82.445319, 28.1139363)),
5 'determine',
6 MDSYS.SDO_GEOMETRY(2003, 8265, NULL,
7 MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 1),
8 MDSYS.SDO_ORDINATE_ARRAY(-82.414884, 28.0094323, -82.387158, 28.0116258,
9 -82.378891, 28.0131216,-82.377988, 28.0133894, -82.37555, 28.0143994,
10 -82.329352, 28.0661089, -82.313207, 28.1006725,
11 -82.362246, 28.1261981, -82.4453185022412, 28.1139363297581,
12 -82.428389, 28.0245891, -82.422103, 28.0117697,
13 -82.421382, 28.0109085, -82.419096, 28.0099741, -82.414884, 28.0094323)),
14 10)
15 FROM DUAL;
SDO_GEOM.RELATE(MDSYS.SDO_GEOMETRY(2001,8265,NULL,MDSYS.SDO_ELEM_INFO_ARRAY(1,1,
DISJOINT
-
Inconsistent results with MDX formula
Hi. I'm converting a BSO cube to ASO, and it has dynamically calculated formulas that I'm converting to MDX. I have a formula that is supposed to accumulate an account (Order Intake) through the months and years until it gets to the current month of the current year (set by substitution variables) and then just carries that balance forward until the end.
This is the formula I wrote in MDX.
IIF( Count( Intersect( {MemberRange([Years].[FY95], [&Auto_CurYr].Lag(1))}, {Years.CurrentMember} ) ) = 1,
  IIF( CurrentMember ([Period]) = [Jan],
    [Order Intake] + ([Contract Value],[Adj],[Years].CurrentMember.PrevMember),
    [Order Intake] + ([Contract Value],[Period].CurrentMember.PrevMember)
  ),
  IIF( CurrentMember ([Years]) = [&Auto_CurYr],
    IIF( CurrentMember ([Period]) = [Jan],
      [Order Intake] + ([Contract Value],[Adj],[Years].CurrentMember.PrevMember),
      IIF( Count( Intersect( {MemberRange([Period].[Feb], [&Auto_CurMoNext_01].Lag(1))}, {Period.CurrentMember} ) ) = 1,
        [Order Intake] + ([Contract Value],[Period].CurrentMember.PrevMember),
        ([Contract Value],[Period].CurrentMember.PrevMember)
      )
    ),
    ([Contract Value],[Adj],[Years].CurrentMember.PrevMember) /*This is the branch that evaluates for months and years after the current month and year*/
  )
)
The inconsistent results are as follows:
I have a spreadsheet that has the years and months across the top in columns. The substitution variables are set to FY09 for the year and Oct for the month. The formula works fine until it gets to Jan of FY10, at which point it produces a number out of thin air, and carries that incorrect number through to the end.
When I put the years and months into my rows, however, and then drill down on the months, I get different results. Not only different, but different results at different times, too. When I first drilled, all results were correct. Now when I drill, it produces a random number in October of FY09 (not entirely random, but actually double what it's supposed to be), then #missing in Nov of FY09, then the correct number thereafter. Same exact data intersection on both spreadsheets, different results. I've retrieved over and over again, and the only time it might change is if I re-drill. I've used both Essbase Add-in and Smart View with consistently inconsistent results.
Has anyone ever encountered this sort of behavior with an MDX formula?

Well, I finally got a formula that works. I did end up using a combination of CASE and IIF, but I never did figure out how to deal with summing up ranges of data correctly while accounting for changing substitution variables, so I had to do a lot of hard coding by month. For instance, I couldn't ask it to sum([Order Intake],[Jan],[&Auto_CurYr]:([Order Intake],[&Auto_CurMo],[&Auto_CurYr]). Although it validated fine, when I tried to retrieve it said the members were not of the same generation, presumably because my substitution variable could potentially be a non-level-0 month (it worked if I hard-coded the end month). Also, I really don't like the MDX versions of @LSIBLINGS and @RSIBLINGS.
But this works.
CASE
When Count( Intersect( {MemberRange([Years].[FY95], [&Auto_CurYr].Lag(1))}, {Years.CurrentMember} ) ) = 1
THEN IIF(CurrentMember ([Period]) = [Jan],
[Order Intake] + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Feb],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Feb]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Mar],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Mar]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Apr],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Apr]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [May],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[May]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jun],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jun]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jul],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jul]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Aug],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Aug]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Sep],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Sep]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Oct],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Oct]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Nov],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Nov]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Dec]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember))
)))))))))))
When CurrentMember ([Years]) IS [&Auto_CurYr]
THEN IIF(CurrentMember ([Period]) = [Jan],
[Order Intake] + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Feb] AND CONTAINS([Feb], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Feb]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Mar] AND CONTAINS([Mar], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Mar]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Apr] AND CONTAINS([Apr], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Apr]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [May] AND CONTAINS([May], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[May]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jun] AND CONTAINS([Jun], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jun]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jul] AND CONTAINS([Jul], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jul]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Aug] AND CONTAINS([Aug], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Aug]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Sep] AND CONTAINS([Sep], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Sep]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Oct] AND CONTAINS([Oct], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Oct]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Nov] AND CONTAINS([Nov], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Nov]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[&Auto_CurMo]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember))
)))))))))))
WHEN CONTAINS([Years].CurrentMember, {MemberRange([&Auto_CurYr].Lead(1), [Years].[FY15])})
THEN ([Contract Value],[Adj],[Years].&Auto_CurYr)
END
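Stripped of MDX syntax, the CASE above implements a running total that accumulates Order Intake month by month up to the current month of the current year and then carries that frozen balance forward. A hedged sketch of that logic in plain Python (the names, years, and numbers are illustrative, not taken from the cube):

```python
# Illustrative model of the accumulate-then-freeze logic; all data is made up.
YEARS = ["FY08", "FY09", "FY10"]
CUR_YEAR, CUR_MONTH = "FY09", 10  # stand-ins for &Auto_CurYr / &Auto_CurMo

# Order Intake of 10 in every month of every year, purely for demonstration.
intake = {(y, m): 10 for y in YEARS for m in range(1, 13)}

def cumulative_intake(year: str, month: int) -> int:
    """Running total through (year, month), frozen at the current period."""
    cap = (YEARS.index(CUR_YEAR), CUR_MONTH)
    yi, mo = min((YEARS.index(year), month), cap)  # freeze after current period
    total = 0
    for y in range(yi + 1):
        last_month = mo if y == yi else 12
        total += sum(intake[(YEARS[y], m)] for m in range(1, last_month + 1))
    return total

print(cumulative_intake("FY08", 3))   # 30: Jan-Mar of the first year
print(cumulative_intake("FY09", 10))  # 220: all of FY08 plus Jan-Oct of FY09
print(cumulative_intake("FY10", 6))   # 220: frozen balance carried forward
```

The key property, and the one the original formula lost at Jan of FY10, is that every period after the current one must return the same frozen balance.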
Thanks for looking at it, Gary, I appreciate it.
Sabrina
Edited by: SabrinaD on Nov 18, 2009 2:29 PM
-
Inconsistent results with ANSI LEFT JOIN on 9iR2
Is this a known issue? Is it solved in 10g?
With the following data setup, I get inconsistent results. It seems to be linked to the combination of using LEFT JOIN with the NULL comparison within the JOIN.
create table titles (title_id int, title varchar(50));
insert into titles values (1, 'Red Book');
insert into titles values (2, 'Yellow Book');
insert into titles values (3, 'Blue Book');
insert into titles values (4, 'Orange Book');
create table sales (stor_id int, title_id int, qty int, email varchar(60));
insert into sales values (1, 1, 1, '[email protected]');
insert into sales values (1, 2, 1, '[email protected]');
insert into sales values (3, 3, 4, null);
insert into sales values (3, 4, 5, '[email protected]');
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND stor_id = 3
5 AND sales.email is not null
6 ;
TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book
SQL>
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND 3 = stor_id
5 AND sales.email is not null;
TITLE_ID TITLE QTY
2 Yellow Book 1
4 Orange Book 5
3 Blue Book
1 Red Book
It seems to matter in which order I specify the operands: stor_id = 3 versus 3 = stor_id.
In the older (+) environment, I would understand this, but here? I'm pretty sure most other databases don't care about the order.
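For comparison, here is a hedged stand-alone check of the same setup in SQLite via Python (not Oracle; the email values are placeholders): with all three predicates in the ON clause, a LEFT OUTER JOIN should keep all four titles and attach qty only to 'Orange Book', regardless of whether the filter is written stor_id = 3 or 3 = stor_id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table titles (title_id int, title varchar(50));
insert into titles values (1, 'Red Book'), (2, 'Yellow Book'),
                          (3, 'Blue Book'), (4, 'Orange Book');
create table sales (stor_id int, title_id int, qty int, email varchar(60));
-- placeholder email addresses
insert into sales values (1, 1, 1, 'a@x'), (1, 2, 1, 'b@x'),
                         (3, 3, 4, null), (3, 4, 5, 'c@x');
""")

def run(filter_expr):
    sql = f"""
        SELECT titles.title_id, title, qty
        FROM titles LEFT OUTER JOIN sales
          ON titles.title_id = sales.title_id
         AND {filter_expr}
         AND sales.email is not null
        ORDER BY titles.title_id"""
    return conn.execute(sql).fetchall()

r1 = run("stor_id = 3")
r2 = run("3 = stor_id")
print(r1 == r2)  # True: operand order must not change the result
```

Both variants return the same four rows, with qty = 5 only on 'Orange Book', which is the result Oracle should also produce in both cases.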
thanks for your insight
Kevin

Don't have a 9i around right now to test ... but in 10 ...
SQL> create table titles (title_id int, title varchar(50));

Table created.

SQL> insert into titles values (1, 'Red Book');

1 row created.

SQL> insert into titles values (2, 'Yellow Book');

1 row created.

SQL> insert into titles values (3, 'Blue Book');

1 row created.

SQL> insert into titles values (4, 'Orange Book');

1 row created.

SQL> create table sales (stor_id int, title_id int, qty int, email varchar(60));

Table created.

SQL> insert into sales values (1, 1, 1, '[email protected]');

1 row created.

SQL> insert into sales values (1, 2, 1, '[email protected]');

1 row created.

SQL> insert into sales values (3, 3, 4, null);

1 row created.

SQL> insert into sales values (3, 4, 5, '[email protected]');

1 row created.

SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND stor_id = 3
5 AND sales.email is not null
6 ;

TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book

SQL>
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND 3 = stor_id
5 AND sales.email is not null;

TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
-
Problems in query results with InfoSet and time-dependent InfoObject
Hi,
I have the following situation:
infoobject ZEMPLOYEE timedep
Infocube 0C0_CCA_C11 (standard cost center/cost element postings)
-> infoset with infoobject and infocube linked with outer join
My query should show all active employees in one month without any posting in the infocube.
My testdata looks like this:
pernr date from date to cost center
4711 01.01.1000 31.12.2002
4711 01.01.2003 31.01.2009 400000
4711 01.02.2009 31.12.9999
That means the employee is only active between 01.01.2003 and 31.01.2009.
I expect the following result in the query with key-date 31.01.2009:
4711 01.01.2003 31.01.2009 400000
I expect the following result in the query with key-date 01.02.2009:
no result
-> because the employee is not active anymore, I don't want to see him in the query.
My query delivers the following result:
4711 01.02.2009 31.12.9999
The first and the last entry in master data is automatically created by the system.
I tried to exclude the not active employees by selection over cost center in the filter (like cost center between 1 and 9999999, or exclude cost center #). But unfortunately the filter selection does not work, because obviously the attributes are not filled in the last entry.
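To make the intended behavior concrete, here is a hedged stand-alone model in plain Python (the data structure is made up; it just mirrors the three master-data rows above): a record counts as active on a key date only if a validity interval covers that date and its cost center attribute is filled.

```python
from datetime import date

# Illustrative model of the time-dependent master data for pernr 4711.
RECORDS = [
    # (valid_from, valid_to, cost_center)
    (date(1000, 1, 1), date(2002, 12, 31), None),
    (date(2003, 1, 1), date(2009, 1, 31), "400000"),
    (date(2009, 2, 1), date(9999, 12, 31), None),
]

def record_on(key_date):
    """Return the master-data record valid on the key date."""
    for frm, to, cc in RECORDS:
        if frm <= key_date <= to:
            return (frm, to, cc)
    return None

def active_record_on(key_date):
    """Only records with a filled cost center count as 'active'."""
    rec = record_on(key_date)
    return rec if rec and rec[2] is not None else None

print(active_record_on(date(2009, 1, 31)))  # the 400000 record
print(active_record_on(date(2009, 2, 1)))   # None: employee no longer active
```

Under this model the key date 31.01.2009 returns the 400000 record and 01.02.2009 returns nothing, which is the result the query should deliver.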
Is there anyone who can tell me how I can exclude the last entry in the master data in the query?
Any help is much appreciated! Points will be assigned!
best regards
Chris

Hi,
the problem is that I can't use the employee status in this case, because for some reason people don't use it.
I have also tried with exceptions and conditions, but the attributes are not filled, so it seems that nothing works.
Do you have any other suggestions?
Thanks!
best regards
Chris -
Inconsistent results between SDO_RELATE and SDO_GEOM.RELATE
Maybe it's the Friday syndrome, but I'm getting some results that I can't get my head around...
Let say I have a table with a single line geometry...
CREATE TABLE BUFFER_TEST (
WHAT VARCHAR2(100),
GEOMETRY SDO_GEOMETRY);
INSERT INTO user_sdo_geom_metadata VALUES ('BUFFER_TEST','GEOMETRY',
MDSYS.SDO_DIM_ARRAY(
MDSYS.SDO_DIM_ELEMENT('X',400000,750000,0.0005),
MDSYS.SDO_DIM_ELEMENT('Y',500000,1000000,0.0005)),
262152);
CREATE INDEX BUFFER_TEST_IDX ON BUFFER_TEST (GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
INSERT INTO BUFFER_TEST (what, geometry)
VALUES ('line',
SDO_GEOMETRY(2002, 262152, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1),
SDO_ORDINATE_ARRAY(713353.165, 736165.812, 713353.449, 736165.402, 713353.849,
736164.203, 713353.85, 736162.252, 713353.087, 736149.092)));
COMMIT;

Now I want to buffer this line and check whether the line is inside the buffer. The actual business need is to see if other lines are in the buffer, but we'll keep it simple for now...
So in the snippet below, I'm buffering the line by 50cm and then using SDO_INSIDE to see if the line is inside the buffer - it isn't.
Then if I use SDO_GEOM.RELATE to determine the relationship, it says INSIDE, which is correct.
If I increase the buffer size to 1 m, then SDO_INSIDE and SDO_GEOM.RELATE both return the correct result.
SQL> DECLARE
2 l_inside NUMBER;
3 l_small_buffer SDO_GEOMETRY;
4 l_determine VARCHAR2(100);
5 l_buffer_size NUMBER := 0.5;
6 BEGIN
7
8 SELECT SDO_GEOM.SDO_BUFFER(b.geometry, usgm.diminfo, l_buffer_size)
9 INTO l_small_buffer
10 FROM user_sdo_geom_metadata usgm, BUFFER_TEST b
11 WHERE usgm.table_name = 'BUFFER_TEST'
12 AND usgm.column_name = 'GEOMETRY'
13 AND b.what = 'line';
14
15 SELECT COUNT(*)
16 INTO l_inside
17 FROM BUFFER_TEST
18 WHERE SDO_INSIDE(geometry, l_small_buffer) = 'TRUE'
19 AND what = 'line';
20
21 SELECT SDO_GEOM.RELATE(geometry, 'determine', l_small_buffer, 0.0005) relationship
22 INTO l_determine
23 FROM BUFFER_TEST
24 WHERE what = 'line';
25
26 DBMS_OUTPUT.PUT_LINE('l_inside: ' || l_inside || ' relationship ' || l_determine);
27
28 END;
29 /
l_inside: 0 relationship INSIDE

Any help would be much appreciated... I'm starting to pull my hair out on this.
This is on Oracle 10.2.0.3.

I can reproduce this on 11.1.0.6 on Windows 32-bit.
Would you recommend I open a support case on this? Do you think it would be possible to backport a fix to 10.2.0.4? -
Inconsistent results with "Apply" commit
Ever since using a PPR event to filter MessageChoice results, I've been experiencing lag when trying to commit data. But the lag occurs only every other time I commit data!
Here are execution times and when the confirmation page appears (in order):
1. 10 sec
2. 1.5 min
3. 12 sec
4. 1.75 min
5. 8 sec
6. 1.6 min
7. 7 sec
8. 1.65 min
9. 6 sec
Why is it that every other execution has an acceptable execution time while the others do not? No code was changed between executions. Has anyone else experienced this?
Thanks,
-Scott
-
Inconsistent results from AlphaComposite and AffineTransform
I'm developing a game using Java, and I've run into a fairly major issue. On my primary development machine, I can use combinations of AlphaComposite.SRC_IN and AffineTransforms with rotation just fine (I'm using the composite to create a lighting overlay for the game by drawing light beams onto the alpha channel of an all-black image buffer). On my other computer, while the image itself rotates and draws on the alpha channel fine, the computer also completely clears a bounding rectangle around the rotated image.
The first computer (the one for which this works) has a GeForce 8800GTS video card, while the second has an older Radeon 9700. The first computer also has the most recent version of the JDK (6). Could either of these factors make the difference?
edit: Upgrading to JDK 6 fixed it. Will need to include a notice about upgrading to JDK 6 with the game, probably.
Message was edited by:
Hyouko
Hi
Yes I have a debug line that prints them out in the doAutoLoadBalancing method - this is the point where I am diagnosing the problem. It only prints the correct results after I have already run the procedure from the SQL command line.
public UserNumber[] doAutoLoadBalancing () throws SQLException {
autoLBNumbersNeeded.clearParameters();
autoLBNumbersNeeded.execute();
mwFF = autoLBNumbersNeeded.getLong(1);
mwHF = autoLBNumbersNeeded.getLong(2);
wmFF = autoLBNumbersNeeded.getLong(3);
wmHF = autoLBNumbersNeeded.getLong(4);
autoLBNumbersNeeded.clearParameters();
if (m_cat.isDebugEnabled()) {m_cat.debug("mwFF: "+mwFF+"; mwHF: " +mwHF+"; wmFF: "+wmFF+"; wmHF: "+wmHF);}
-
Epson print drivers - should there be different results with OS X and XP?
Since switching to a Mac a couple of years ago, the photo print output from my venerable Epson Stylus Photo RX700 seems to have become poor, with a slight greenish hue to the photos and a lack of colour crispness. I swapped the printer back to my even more venerable Dell XP laptop and the photo print quality is definitely better than with the Mac. I have installed all the printer updates as they occur in the App Store after selecting "Software Update". Even though the printer driver is obtained via "Software Update", I understand that it originates from Epson, or does it? BTW, for my model of Epson, the only way to install the print driver is via this route. Should the two drivers (OSX and Windows) result in a different quality of output? The rather large number of posts in this forum regarding Epson drivers does suggest an inherent issue but I am reluctant to buy a new printer as I have a stock of cartridges for my Epson model and they cost a small fortune.
I haven't observed the large number of posts?
I have three Epsons of that era, and the drivers seem to work OK for me.
I suggest, for troubleshooting, you install the alternate drivers "Gutenprint" and re-add the printer. You might find you like it, too, because of all the color control it gives you.
http://gimp-print.sourceforge.net/MacOSX.php -
Inconsistent results with Windows App Certification Kit
Hi, I asked about this issue before on a different forum but didn't get a response. Support pointed me towards this forum, as maybe the other one wasn't appropriate. My company is attempting to renew our MS partnership, and we are trying to certify our desktop app for Windows 8. I have attached a screen shot below showing the registry entries that our app makes. When we try to test for certification (Windows 8) we get different (but similar) results on different systems using the exact same installation package.
Here is an example of the results we get:
Results:
An optional value 'VersionMajor' is missing or invalid for program Chiro8000.
An optional value 'VersionMinor' is missing or invalid for program Chiro8000.
An optional value 'MajorVersion' is missing or invalid for program Chiro8000.
An optional value 'MinorVersion' is missing or invalid for program Chiro8000.
A non-optional value 'DisplayName' is missing or invalid for program Chiro8000.
A non-optional value 'Publisher' is missing or invalid for program Chiro8000.
A non-optional value 'ProductVersion' is missing or invalid for program Chiro8000.
An optional value 'InstallLocation' is missing or invalid for program Chiro8000.
Can anyone point us in the right direction?

How did you publish this package? Here are some similar questions, maybe caused by the installer:
http://stackoverflow.com/questions/21182856/windows-app-certification-kit-test-result-app-didnt-create-the-require-regist
http://www.advancedinstaller.com/forums/viewtopic.php?f=2&t=21368
http://www.advancedinstaller.com/forums/viewtopic.php?f=2&t=29782
https://social.msdn.microsoft.com/Forums/vstudio/en-US/e61c393d-e5af-4fee-ad8b-29c36e889043/windows-app-certification-kit-tests-fail-for-desktop-app-installed-from-msi?forum=windowscompatibility
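For reference, the values the kit is complaining about live under the application's Uninstall key in the registry. As a hedged sketch (the key path and value names follow the standard Add/Remove Programs convention that MSI-based installers write; the subkey name and all data values here are purely illustrative, not taken from the actual installer):

```reg
Windows Registry Editor Version 5.00

; Illustrative only: the <ProductCode> subkey and all data values are hypothetical.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\<ProductCode>]
"DisplayName"="Chiro8000"
"Publisher"="Example Publisher, Inc."
"DisplayVersion"="8.0.0"
"VersionMajor"=dword:00000008
"VersionMinor"=dword:00000000
"InstallLocation"="C:\\Program Files\\Chiro8000\\"
```

If the installer writes these values per-user, under a different hive, or only on some systems, that could also explain why the kit reports different results on different machines with the same package.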
Best Regards,
Please remember to mark the replies as answers if they help -
Inconsistent Results with Email Export?
I've set up my presets so that when I export to Mail via the Mail icon my images should be reduced in size/resolution and watermarks should be added. This works fine on direct exports but not when exporting to Mail: my images aren't being resized or watermarked.
Here's how I have it set up: Aperture/Presets/Image Export/Email Medium-JPEG/Show Watermark is checked, as is Scale Watermark. I've also created a custom preset which is basically identical, but still no dice. As I said, Export works fine when exporting to a folder. Is there another way to set this up? Am I missing something? Anyone else having this issue?
You can't patch XE; unfortunately it comes with pre-2007 DST rules.
There is a Metalink note describing how to fix the DST changes, and while it's not really a "supported" method, neither is XE. If you can get updated timezone files from a later patch set for the same release (10gR2) on the right operating system, then after a shutdown/startup of the database the updated DST rules will be in place. The timezone files are in $ORACLE_HOME/oracore/zoneinfo.
Another unfortunate consequence: any values already stored in the database using TIMESTAMP WITH LOCAL TIME ZONE datatypes for the period affected by the DST changes won't be correct. For example, under the post-2007 rules there is no 2010-03-14 02:01 local time, but with the older timezone rules in place that would be a valid timestamp. The data has to be saved off before updating the timezone file and re-translated to TIMESTAMP WITH LOCAL TIME ZONE datatypes after the update.
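The offsets in the original post line up exactly with the US rule change: the Energy Policy Act of 2005 moved the DST start from the first Sunday of April (the 1987-2006 rule) to the second Sunday of March from 2007 onward. A quick sketch of the two rules in Python (just an illustration, not Oracle code):

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) in a month."""
    first = date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + timedelta(days=offset + 7 * (n - 1))

def us_dst_start(year, pre_2007_rules=False):
    """US DST start date under the old (1987-2006) or current (2007+) rule."""
    if pre_2007_rules:
        return nth_weekday(year, 4, 6, 1)   # first Sunday in April
    return nth_weekday(year, 3, 6, 2)       # second Sunday in March
```

Under the old rule DST had not yet started on 2010-03-17 (it would start April 4), which is why localtimestamp came back an hour off and why current_timestamp plus 18 days landed exactly on the old-rule transition date.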
IMHO, storing literal timezone info isn't an ideal practice; let the client settings do the time interpretation. Time zone rules are always changing. It's the interpretation of the time that gets changed. From time to time. :( -
Inconsistent Results with Flash Using Different Monitors
Hi Folks,
I hope this is the correct forum for this question. If not, let me know where to post it.
We do a lot of flash on DotNetNuke web sites. Strange combination? Maybe. At any rate we have had reasonable success but recently ran into a strange issue. Maybe it isn’t so strange for graphics gurus who really know a lot about things like aspect ratios, display resolution and various types of monitors but it was for us. We finally fixed our problem but not to my satisfaction because I don’t fully understand why what we did worked. Maybe someone on this forum more expert than me can provide some knowledge. I would really appreciate it. I always like to increase my knowledge base.
We had three Flash movie modules on a web page displayed via three DNN media modules. We do that all the time. However this time the center one was skewed or pushed over several inches to the right on a customer laptop and on one of our machines running Vista. It appeared to be specific to IE 8 but we didn’t do a lot of testing on other browsers. Every other machine we had looked OK. I had other folks look at the site but no one had the skewing issue.
From the testing we did, the issues appear to break down into two problems. The skewing on our Vista machine was corrected by reinstalling IE 8; I guess this was due to some problem when IE 8 was first installed.
Our customer however still had this problem with Win XP even after reinstalling IE 8 and he claimed the same problem existed on a new Win 7 machine with IE 8. We managed to get a laptop running Win XP and IE 8 with a monitor that we thought was like the one the customer was using. Sure enough we got the skewing problem. Reinstalling IE 8 made no difference as it did with our Vista machine. We then played with the actual size (height and width) of the Flash movie modules and the DNN media modules by trial and error until the skewing disappeared on the laptop. Everything now displays OK on all monitors that we have tested including all the customer’s machines. This appears to have something to do with the aspect ratio and the Flash player. I don’t believe in magic. There has to be a mathematical technical explanation for this but I don’t have the expertise to know exactly what it is.
Any help would be much appreciated. If there are any books, documents, or other material that will eliminate my ignorance please indicate what they are.
Thanks,
G. M. -
[Solved]Getting different results with PKGBUILD and compiling manually
Hi,
When I compile this one package manually and install it, it works 100%; however, when I try it in a PKGBUILD the compile fails.
Here is the PKGBUILD I have so far (still working on it):
pkgname=smtp-gated
pkgver=1.4.16.2
pkgrel=1
pkgdesc="This software blocks SMTP sessions used by e-mail worms and viruses on the NA(P)T router. It acts like a proxy, intercepting outgoing SMTP connections and scanning session data on-the-fly. When a message is infected, the SMTP session is terminated. It's meant to be used (mostly) by ISPs, so they can eliminate infected hosts from their network and (preferably) educate their users."
url="http://smtp-proxy.klolik.org/"
license="GNU"
arch=('i686' 'x86_64')
#depends=('')
#install=smtp-gated.install
source=("$url/files/$pkgname-$pkgver.tar.gz")
md5sums=('3857d03c847efd89b052acaeffaa453b')
build() {
cd $startdir/src/$pkgname-$pkgver || return 1
msg CONFIGURE
#./configure --prefix=/usr || return 1
./configure || return 1
msg MAKE
make || return 1
msg INSTALL
make install INSTALL_ROOT=$startdir/pkg/ || return 1
}
And here is the compile error when I run "makepkg":
==> MAKE
make all-recursive
make[1]: Entering directory `/root/ABS_snmp-gated/src/smtp-gated-1.4.16.2'
Making all in src
make[2]: Entering directory `/root/ABS_snmp-gated/src/smtp-gated-1.4.16.2/src'
if gcc -DHAVE_CONFIG_H -I. -I. -I.. -DMD5_TEST -march=x86-64 -mtune=generic -O2 -pipe -Wall -MT md5_test-md5.o -MD -MP -MF ".deps/md5_test-md5.Tpo" -c -o md5_test-md5.o `test -f 'md5.c' || echo './'`md5.c; \
then mv -f ".deps/md5_test-md5.Tpo" ".deps/md5_test-md5.Po"; else rm -f ".deps/md5_test-md5.Tpo"; exit 1; fi
gcc -march=x86-64 -mtune=generic -O2 -pipe -Wall -Wl,--hash-style=gnu -Wl,--as-needed -o md5-test md5_test-md5.o
if gcc -DHAVE_CONFIG_H -I. -I. -I.. -march=x86-64 -mtune=generic -O2 -pipe -Wall -MT regex-test.o -MD -MP -MF ".deps/regex-test.Tpo" -c -o regex-test.o regex-test.c; \
then mv -f ".deps/regex-test.Tpo" ".deps/regex-test.Po"; else rm -f ".deps/regex-test.Tpo"; exit 1; fi
gcc -march=x86-64 -mtune=generic -O2 -pipe -Wall -Wl,--hash-style=gnu -Wl,--as-needed -o regex-test -lpcre regex-test.o
regex-test.o: In function `main':
regex-test.c:(.text+0xc): undefined reference to `pcre_version'
regex-test.c:(.text+0x3c): undefined reference to `pcre_compile'
regex-test.c:(.text+0x71): undefined reference to `pcre_exec'
regex-test.c:(.text+0x88): undefined reference to `pcre_free'
collect2: ld returned 1 exit status
make[2]: *** [regex-test] Error 1
make[2]: Leaving directory `/root/ABS_snmp-gated/src/smtp-gated-1.4.16.2/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/ABS_snmp-gated/src/smtp-gated-1.4.16.2'
make: *** [all] Error 2
Any pointers or help would be greatly appreciated.
Last edited by Tinuva (2010-01-27 11:55:41)
Alright, not sure if this is up to standard to go into the AUR, but this is what I have for now:
PKGBUILD:
pkgname=smtp-gated
pkgver=1.4.16.2
pkgrel=1
pkgdesc="This software blocks SMTP sessions used by e-mail worms and viruses on the NA(P)T router. It acts like a proxy, intercepting outgoing SMTP connections and scanning session data on-the-fly. When a message is infected, the SMTP session is terminated. It's meant to be used (mostly) by ISPs, so they can eliminate infected hosts from their network and (preferably) educate their users."
url="http://smtp-proxy.klolik.org/"
license="GNU"
arch=('i686' 'x86_64')
#depends=('')
install=smtp-gated.install
source=("$url/files/$pkgname-$pkgver.tar.gz")
md5sums=('3857d03c847efd89b052acaeffaa453b')
build() {
cd $startdir/src/$pkgname-$pkgver || return 1
msg CONFIGURE
# strip -Wl,--as-needed; the generated link line puts -lpcre before the
# objects, so with --as-needed libpcre is dropped and pcre_* go undefined
export LDFLAGS="${LDFLAGS//-Wl,--as-needed}"
./configure --prefix=/usr || return 1
msg MAKE
make || return 1
msg INSTALL
make install INSTALL_ROOT=$startdir/pkg/ || return 1
install -D -m755 ../../smtp-gated ${startdir}/pkg/etc/rc.d/smtp-gated || return 1
install -D -m644 ../../smtp-gated.conf ${startdir}/pkg/etc/smtp-gated.conf || return 1
}
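For what it's worth, the export LDFLAGS line in build() is plain bash pattern substitution: ${var//pattern} with an empty replacement deletes every occurrence of the pattern. A standalone sketch (the example LDFLAGS value here is illustrative; Arch's real defaults live in /etc/makepkg.conf):

```shell
# example LDFLAGS containing the troublesome flag
LDFLAGS="-Wl,--hash-style=gnu -Wl,--as-needed"

# ${var//pattern} with no replacement removes every occurrence
LDFLAGS="${LDFLAGS//-Wl,--as-needed}"

echo "$LDFLAGS"   # prints: -Wl,--hash-style=gnu  (flag gone; a stray trailing space remains)
```

The cleaner long-term fix would be patching the package's Makefile so libraries come after the object files on the link line (objects first, then -lpcre), at which point --as-needed can stay enabled.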
smtp-gated.install
# arg 1: the new package version
post_install() {
mkdir /var/run/smtp-gated
mkdir /var/spool/smtp-gated
mkdir /var/spool/smtp-gated/msg
chown mail.mail /var/run/smtp-gated
chown mail.mail /var/spool/smtp-gated -R
echo "
SMTP-GATED Instructions:
After installing SMTP-GATED you need to edit smtp-gated.conf
Good luck!
"
/bin/true
}
op=$1
shift
$op $*
smtp-gated that goes into /etc/rc.d/
#!/bin/bash
. /etc/rc.conf
. /etc/rc.d/functions
# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/usr/sbin:/usr/bin:/sbin:/bin
DESC="SMTP Proxy"
NAME=smtp-gated
DAEMON=/usr/sbin/$NAME
CONFIG=/etc/$NAME.conf
DAEMON_ARGS="$CONFIG"
PID=/var/run/$NAME/$NAME.pid
SCRIPTNAME=/etc/rc.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
# Read configuration variable file if it is present
[ -f /etc/conf.d/$NAME ] && . /etc/conf.d/$NAME
case "$1" in
start)
stat_busy "Starting $NAME"
#[ -z "$PID" ] && $DAEMON $DAEMON_ARGS
$DAEMON $DAEMON_ARGS &>/dev/null
if [ $? -gt 0 ]; then
stat_fail
else
add_daemon $NAME
stat_done
fi
;;
stop)
stat_busy "Stopping $NAME"
#[ -n "$PID" ] && kill $PID &> /dev/null
smtp-gated -K &> /dev/null
if [ $? -gt 0 ]; then
stat_fail
else
rm_daemon $NAME
stat_done
fi
;;
restart)
$0 stop
# will not start if not fully stopped, so sleep
sleep 2
$0 start
;;
*)
echo "usage: $0 {start|stop|restart}"
;;
esac
smtp-gated.conf that goes into /etc
# Virus scanning: yes
# SPAM scanning: yes
#proxy_name smtp-proxy.mydomain.com
port 9199
; bind_address 192.168.1.254
; source_addr 0.0.0.0
mode netfilter
; action_script /etc/smtp-gated-action.sh
lock_duration 1800
lock_path /var/spool/smtp-gated/lock
spool_path /var/spool/smtp-gated/msg
spool_perm 0660
pidfile /var/run/smtp-gated/smtp-gated.pid
;dumpfile /var/run/smtp-gated/smtp-state-dump
set_user mail
priority 5
lock_on virus,spam,maxhost
max_connections 64
max_per_host 10
;max_per_ident 6
;max_load 3.0
ignore_errors yes
spool_leave_on error,spam
nat_header_type ip-only
; abuse [email protected]
log_helo yes
log_mail_from accepted,rejected
log_rcpt_to accepted,rejected
; locale pl_PL
;scan_max_size 1048576
;spam_max_size 0
spam_max_size 131072
;spam_max_load 0.5
spam_threshold 5.0
; scanner_path
antivirus_type clamd
antivirus_path /var/lib/clamav/clamd.sock
antispam_type spamassassin
antispam_path /var/run/spamd.sock
Last edited by Tinuva (2010-01-27 12:13:03) -
Inconsistent results using the "places" view on the iPad with geotagged photos
I am getting inconsistent results with geotagged photos on the iPad (both 1 and 2), iOS 5.01. It seems that sometimes photos will show up in the correct location in the Places view and others will not (they will not show up at all). In fact, I can have a geotagged photo in one folder, sync with iTunes, and it will not show up in the Places view; then I can move that same picture to a different folder (which becomes an album on the iPad), re-sync, and it will now show up at the correct location in the Places view. All photos show up in the album view. Everything is the latest version: iTunes, iPad, etc. I can also have many photos geotagged to exactly the same location (exact same coordinates); some will show up, others will not.
I have used different methods to geotag my photos. At first I was using Photoshop Elements 8 until they stopped supporting geotagging photos. Now I am using a third party application GeoSetter (excellent free application). The GPS data is clearly in the Exif data but the results are not consistent.
Any ideas?
I believe I have solved this problem myself and thought others might benefit from what I found. It turns out to be what I believe is a glitch in iTunes. iTunes creates a cache of all the photos that you sync with your iPad, iPhone, etc. Apparently it does not update that cache when you change the photo, at least not the geotagging data. The file modify date was changing, but that wasn't enough to cause iTunes to update the photo cache. I deleted the cache of all the photos and re-synced, and presto: now all my photos were showing up at the correct location in the iPad Places view.
I might add that I called Apple tech support, was routed to a higher-tier support person, and was very disappointed with his response. He basically told me that the Places view on the iPad and iPhone was designed to be used only with an Apple computer and Apple software, and he was surprised the Places view was working at all with pictures synced from a Windows-based machine. He also said he was unable to help me since he couldn't provide support for non-Apple products. Although I understand this to a point, I think he was too quick to make this decision. The solution, after all, was a simple one, and one I think he should have known. I would think this cache problem would happen even if I were using Apple products. I view this as a bug in iTunes: Apple needs to compare the modified date of the file and update the cache if it has changed. -
Running report and get the report result with coding
Hi all,
In our R/3 system, there is a custom sales report.
My question is: is there a possibility to get the data by running this report, grab the result with code, and store it in an internal table?
Sorry if my question is too basic; I am not an ABAPer.
I am just wondering, to find a new solution for my project.
Regards,
Steph
My requirement is: I want to get the result from this report
(rather than trying to get the data from the original SAP tables, because this report is very complicated, with a lot of selection data) and use this result in my new program.
The mechanism I want is to pull the result from the current report, not to add code to the current report to push data into the new program, so as to avoid changing the report.
Btw, the output of this report is not only the Excel file; we can also run this report in foreground mode and see the result.
The report is not ALV report.
Regards,
Steph