Distinct Count doesn't return the expected results
Hi All,
I was struggling a little to implement a Distinct Count measure over an account dimension in my cube. I read a couple of posts related to that and followed the steps posted by the experts.
The cube processes successfully, but the results I'm getting are not correct. The cube returns a higher value than the correct one calculated directly from the fact table.
Here are the details:
Query of my fact table:
select distinct cxd_account_id,
contactable_email_flag,
case when recency_date>current_date-365 then '0-12' else '13-24' end RECENCY_DATE_ROLLUP,
1 QTY_ACCNT
from cx_bi_reporting.cxd_contacts
where cxd_account_id<>-1 and recency_date >current_date-730;
I have the following dimensions:
Account (with 3 different hierarchies)
Contactable Email Flag (Just 3 values, Y, N, Unknown)
Recency_date (Just dimension members)
All dimensions are sparse and the cube is compressed. I defined MAXIMUM as the aggregation operator for Contactable Email Flag and Recency Date and, at the end, SUM over Account.
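A likely cause of the overcount (my assumption; the thread does not confirm it) is that SUM over a precomputed 1-per-distinct-row flag counts each account once per combination of the other dimensions, so an account that appears under more than one flag or recency bucket is counted twice at aggregate levels. A minimal Python sketch of the effect, with invented rows:

```python
# Hypothetical fact rows: (account_id, contactable_email_flag, recency_bucket, qty_accnt)
rows = [
    (101, "Y", "0-12", 1),
    (101, "N", "13-24", 1),  # same account under a second flag/bucket combination
    (102, "Y", "0-12", 1),
]

summed = sum(r[3] for r in rows)      # what SUM over the QTY_ACCNT flag returns
distinct = len({r[0] for r in rows})  # the true distinct account count

print(summed, distinct)  # 3 vs 2: SUM counts account 101 twice
```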
I saw that there is a patch to fix an issue when different aggregation rules are used in a compressed cube and asked the DBA folks to apply it. They told me that the patch cannot be applied because we already have a more recent version installed (the patch is for 11.2.0.1).
These are the details of what we have installed:
OLAP Analytic Workspace 11.2.0.3.0 VALID
Oracle OLAP API 11.2.0.3.0 VALID
OLAP Catalog 11.2.0.3.0 VALID
Is there any other patch that needs to be applied to fix this issue? Or is it already included in the version we have installed (11.2.0.3.0)?
Is there something wrong with the definition of my fact table, and is that why I'm not getting the right results?
Any help will be really appreciated!
Thanks in advance,
Martín
Not sure I would have designed the dimensions/cubes the way you did, but there is another method by which you can obtain distinct counts.
It basically relies on the basic OLAP DML expression language and can be put in a Calculated Measure, or you can create two Calculated Measures
to contain each specific result. I use this method to calculate distinct counts when I want to calculate averages, etc.
IF account_id ne -1 and (recency_date GT today-365) THEN -
CONVERT(NUMLINES(UNIQUELINES(CHARLIST(Recency_date))) INTEGER)-
ELSE IF account_id ne -1 and (recency_date GT today-730 and recency_date LE today-365) THEN -
CONVERT(NUMLINES(UNIQUELINES(CHARLIST(Recency_date))) INTEGER)-
ELSE NA
This exact code may not work in your case, but I think you can get the gist of the process involved.
This assumes the aggregation operators are set to the default (SUM), but it may work with how you have them set.
Regards,
Michael Cooper
Similar Messages
-
The same selection doesn't return the same result in 2 identical systems
Hi,
After a system copy, we have 2 systems with the same data.
In a program, the same selection doesn't return the same result; it seems that the sort order is different.
I have checked indexes but they are ok in both systems.
Here is the selection:
SELECT but000~partner
INTO TABLE lt_partner
*UP TO 50 ROWS
FROM but000
JOIN but020
ON but020~partner = but000~partner
JOIN adrc
ON adrc~addrnumber = but020~addrnumber
JOIN but100
ON but100~partner = but000~partner
WHERE but020~addr_valid_from LE '20100727000000'
AND but020~addr_valid_to GE '20100727000000'.
Is there a customizing point to define how the database must be sorted ?
Thanks.
Edited by: julien schneerberger on Jul 27, 2010 4:18 PM
Hi,
Thank you for your answer.
Result is the same and I don't sort data after the selection.
Do you think the system copy has "broken" the record order in the database?
If that is the case, it will be impossible to reproduce the same sort order, right?
Edited by: julien schneerberger on Jul 27, 2010 5:19 PM
Edited by: julien schneerberger on Jul 27, 2010 5:20 PM -
Unable to get the expected results with connection pooling
Hi All,
I have been trying to set up the JDBC connection pooling provided by Apache Tomcat 4.0 with MySQL 4.0.16-nt, following http://jakarta.apache.org/tomcat/tomcat-4.1-doc/jndi-datasource-examples-howto.html, and my configuration is as follows:
server.xml
<Context path="/DBTest" docBase="DBTest"
debug="5" reloadable="true" crossContext="true">
<Logger className="org.apache.catalina.logger.FileLogger"
prefix="localhost_DBTest_log." suffix=".txt"
timestamp="true"/>
<Resource name="jdbc/TestDB"
auth="Container"
type="javax.sql.DataSource"/>
<ResourceParams name="jdbc/TestDB">
<parameter>
<name>factory</name>
<value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
</parameter>
<!-- Maximum number of dB connections in pool. Make sure you
configure your mysqld max_connections large enough to handle
all of your db connections. Set to 0 for no limit.
-->
<parameter>
<name>maxActive</name>
<value>100</value>
</parameter>
<!-- Maximum number of idle dB connections to retain in pool.
Set to 0 for no limit.
-->
<parameter>
<name>maxIdle</name>
<value>30</value>
</parameter>
<!-- Maximum time to wait for a dB connection to become available
in ms, in this example 10 seconds. An Exception is thrown if
this timeout is exceeded. Set to -1 to wait indefinitely.
-->
<parameter>
<name>maxWait</name>
<value>10000</value>
</parameter>
<!-- MySQL dB username and password for dB connections -->
<parameter>
<name>username</name>
<value>javauser</value>
</parameter>
<parameter>
<name>password</name>
<value>javadude</value>
</parameter>
<!-- Class name for mm.mysql JDBC driver -->
<parameter>
<name>driverClassName</name>
<value>org.gjt.mm.mysql.Driver</value>
</parameter>
<!-- The JDBC connection url for connecting to your MySQL dB.
The autoReconnect=true argument to the url makes sure that the
mm.mysql JDBC Driver will automatically reconnect if mysqld closed the
connection. mysqld by default closes idle connections after 8 hours.
-->
<parameter>
<name>url</name>
<value>jdbc:mysql://localhost:3306/javatest?autoReconnect=true</value>
</parameter>
</ResourceParams>
</Context>
web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC
"-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
<description>MySQL Test App</description>
<resource-ref>
<description>DB Connection</description>
<res-ref-name>jdbc/TestDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
</resource-ref>
</web-app>
test.jsp
<jsp:useBean id="foo" class="foo.DBTest" scope="page" />
<html>
<head>
<title>DB Test</title>
</head>
<body>
<%
foo.DBTest tst = new foo.DBTest();
tst.init();
%>
<h2>Results</h2>
Foo <%= tst.getFoo() %>
Bar <%= tst.getBar() %>
</body>
</html>
DBTest.java
package foo;
import javax.naming.*;
import javax.sql.*;
import java.sql.*;
public class DBTest {
    String foo = "Not Connected";
    int bar = -1;
    public void init() {
        try {
            Context ctx = new InitialContext();
            if (ctx == null)
                throw new Exception("Boom - No Context");
            DataSource ds =
                (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");
            if (ds != null) {
                Connection conn = ds.getConnection();
                if (conn != null) {
                    foo = "Got Connection " + conn.toString();
                    Statement stmt = conn.createStatement();
                    ResultSet rst =
                        stmt.executeQuery("select id, foo, bar from testdata");
                    if (rst.next()) {
                        foo = rst.getString(2);
                        bar = rst.getInt(3);
                    }
                    conn.close();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    public String getFoo() { return foo; }
    public int getBar() { return bar; }
}
Now when I try to run this in the browser, everything goes fine except that it doesn't show the expected results; instead, it shows the following in the browser:
Results
Foo Not Connected
Bar -1
Can anybody help me figure out why I am getting this result when everything looks right on my side? Is the program unable to connect to the database, or does it not support the JDBC version that I am using?
Thanks in advance
Regards
Vikas
Oh, I think this is not the right place to post this message. I have placed the same question elsewhere on this forum. Please ignore this post here!
MJ, by the way the link that you suggested is not useful to me.
Thank you -
I am using the TABLE(CAST()) operation in PL/SQL and it is returning no data.
Here is what I have done:
1. Created Record type
CREATE OR REPLACE TYPE target_rec AS OBJECT
(
target_id NUMBER(10),
target_entity_id NUMBER(10),
dd CHAR(3),
fd CHAR(3),
code NUMBER(10),
target_pct NUMBER,
template_nm VARCHAR2(50),
p_symbol VARCHAR2(10),
pm_init VARCHAR2(3),
target_name VARCHAR2(20),
target_type VARCHAR2(30),
target_caption VARCHAR2(30),
sort_order NUMBER(4)
);
2. Created Table type
CREATE OR REPLACE TYPE target_arr AS TABLE OF target_rec
3. Created Stored procedure which accepts parameter of type target_arr and runs the Table(Cast()) function on it.
Following is the simplified form of my procedure.
PROCEDURE get_target_weights
(
p_in_template_target IN target_arr,
p_out_count OUT NUMBER
)
IS
BEGIN
SELECT count(*) INTO p_out_count
FROM TABLE(CAST(p_in_template_target AS target_arr)) arr;
END;
I am calling get_target_weights from my java code and passing p_in_template_target with 10140 records.
Scenario 1: If target_pct in the last record is 0, p_out_count returned from the procedure is 0.
Scenario 2: If target_pct in the last record is any other value(say 0.01), p_out_count returned from the procedure is 10140.
Please help me understand why the Table(Cast()) is not returning the correct results in Scenario 1. Also adding or deleting any record from the test data returns the correct results (i.e. if keep target_pct in the last record as 0 but add or delete any record).
Let me know how I can attach the test data I am using to help you debug, as I don't see any Attach File button on the Post Message screen of the forum.
I am not able to reproduce this problem with a small data set. I can only reproduce it with the data having 10140 records.
I am not sure if this is the memory issue as adding a new record also solves the problem.
This should not be an error caused by filling the records incorrectly from Java: for testing purposes I saved the records that I am sending from Java into a table, and updated the stored procedure to read the data from that table and then perform the TABLE(CAST()) operation. I am still getting 0 as the output for scenario 1 mentioned in my last mail.
Here is what I have updated:
1. Created the table target_table
CREATE TABLE target_table
(
target_id NUMBER(10),
target_entity_id NUMBER(10),
dd CHAR(3),
fd CHAR(3),
code NUMBER(10),
target_pct NUMBER,
template_nm VARCHAR2(50),
p_symbol VARCHAR2(10),
pm_init VARCHAR2(3),
target_name VARCHAR2(20),
target_type VARCHAR2(30),
target_caption VARCHAR2(30),
sort_order NUMBER(4)
);
2. Inserted data into the table : The script has around 10140 rows. Pls let me know how can I send it to you
3. Updated procedure to read data from table and stored into variable of type target_arr. Run Table(cast()) operation on target_arr and get the count
PROCEDURE test_target_weights
IS
v_target_rec target_table%ROWTYPE;
CURSOR wt_cursor IS
SELECT * FROM target_table;
v_count NUMBER := 1;
v_target_arr target_arr := target_arr();
v_target_arr_rec target_rec;
v_rec_count NUMBER;
BEGIN
OPEN wt_cursor;
LOOP
FETCH wt_cursor INTO v_target_rec; -- fetch data from table into local record
EXIT WHEN wt_cursor%NOTFOUND;
-- move data into target_arr
v_target_arr_rec := target_rec(v_target_rec.target_id, v_target_rec.target_entity_id,
v_target_rec.dd, v_target_rec.fd, v_target_rec.code, v_target_rec.target_pct,
v_target_rec.template_nm, v_target_rec.p_symbol, v_target_rec.pm_init, v_target_rec.target_name,
v_target_rec.target_type, v_target_rec.target_caption, v_target_rec.sort_order);
v_target_arr.extend();
v_target_arr(v_count) := v_target_arr_rec;
v_count := v_count + 1;
END LOOP;
CLOSE wt_cursor;
-- run TABLE(CAST()) on target_arr
SELECT count(*) INTO v_rec_count
FROM TABLE(CAST(v_target_arr AS target_arr)) arr;
DBMS_OUTPUT.enable;
DBMS_OUTPUT.PUT_LINE('p_out_count ' || v_rec_count);
DBMS_OUTPUT.PUT_LINE('v_count ' || v_count);
END;
Output is
p_out_count 0
v_count 10140
Expected output
p_out_count 10140
v_count 10140 -
Shouldn't using WITH return the same results as if you'd put the results in a table first?
First off, here's my version info:
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
I just reread the documentation again on the subquery factoring clause of SELECT statement, and I didn't see any restrictions that would apply.
Can someone help me understand why I'm getting different results? I'd like to be able to use the statement that creates MAT3, but for some reason it doesn't work. However, when I break it up and store the last TMP subquery in a table (MAT1), I'm able to get the expected results in MAT2.
Sorry if the example seems a little esoteric. I was trying to put something together to help illustrate another problem, so it was convenient to use the same statements to illustrate this problem.
drop table mat1;
create table mat1 as
with skus as (
select level as sku_id
from dual
connect by level <= 1000
),
tran_dates as (
select to_date('20130731', 'yyyymmdd') + level as tran_date
from dual
connect by level <= 31
),
sku_dates as (
select s.sku_id,
t.tran_date,
case when dbms_random.value * 5 < 4
then 0
else 1
end as has_changes,
round(dbms_random.value * 10000, 2) as unit_cost
from skus s
inner join tran_dates t
on 1 = 1
)
select d.sku_id,
d.tran_date,
d.unit_cost
from sku_dates d
where d.has_changes = 1;
drop table mat2;
create table mat2 as
select m.sku_id,
m.tran_date,
m.unit_cost,
min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
from mat1 m;
drop table mat3;
create table mat3 as
with skus as (
select level as sku_id
from dual
connect by level <= 1000
),
tran_dates as (
select to_date('20130731', 'yyyymmdd') + level as tran_date
from dual
connect by level <= 31
),
sku_dates as (
select s.sku_id,
t.tran_date,
case when dbms_random.value * 5 < 4
then 0
else 1
end as has_changes,
round(dbms_random.value * 10000, 2) as unit_cost
from skus s
inner join tran_dates t
on 1 = 1
),
tmp as (
select d.sku_id,
d.tran_date,
d.unit_cost
from sku_dates d
where d.has_changes = 1
)
select m.sku_id,
m.tran_date,
m.unit_cost,
min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
from tmp m;
select count(*) from mat2;
select count(*) from mat3;
select count(*) from mat2;
COUNT(*)
31000
Executed in 0.046 seconds
select count(*) from mat3;
COUNT(*)
0
Executed in 0.031 seconds
I think there's something else going on.
I made the change you suggested, with a slight modification to retain the same functionality of flagging ~80% of the rows as not having changes. I then copied that section of my script - included below - and pasted it into my session twice. Unfortunately, I got different results each time. I have had a number of strange problems when using the WITH clause, which is one of the reasons I jumped at posting something here when I encountered it again in this context.
Can you help me understand why this would happen?
drop table mat3;
create table mat3 as
with skus as (
select level as sku_id
from dual
connect by level <= 1000
),
tran_dates as (
select to_date('20130731', 'yyyymmdd') + level as tran_date
from dual
connect by level <= 31
),
sku_dates as (
select s.sku_id,
t.tran_date,
case when dbms_random.value(1,100) * 5 < 400
then 0
else 1
end as has_changes,
round(dbms_random.value * 10000, 2) as unit_cost
from skus s
inner join tran_dates t
on 1 = 1
),
tmp as (
select d.sku_id,
d.tran_date,
d.unit_cost
from sku_dates d
where d.has_changes = 1
)
select m.sku_id,
m.tran_date,
m.unit_cost,
min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
from tmp m;
select count(*) from mat2;
select count(*) from mat3;
152249 < mattk > drop table mat3;
Table dropped
Executed in 0.016 seconds
152249 < mattk > create table mat3 as
2 with skus as (
3 select level as sku_id
4 from dual
5 connect by level <= 1000
6 ),
7 tran_dates as (
8 select to_date('20130731', 'yyyymmdd') + level as tran_date
9 from dual
10 connect by level <= 31
11 ),
12 sku_dates as (
13 select s.sku_id,
14 t.tran_date,
15 case when dbms_random.value(1,100) * 5 < 400
16 then 0
17 else 1
18 end as has_changes,
19 round(dbms_random.value * 10000, 2) as unit_cost
20 from skus s
21 inner join tran_dates t
22 on 1 = 1
23 ),
24 tmp as (
25 select d.sku_id,
26 d.tran_date,
27 d.unit_cost
28 from sku_dates d
29 where d.has_changes = 1
30 )
31 select m.sku_id,
32 m.tran_date,
33 m.unit_cost,
34 min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
35 from tmp m
36 ;
Table created
Executed in 0.53 seconds
152250 < mattk > select count(*) from mat2;
COUNT(*)
0
Executed in 0.047 seconds
152250 < mattk > select count(*) from mat3;
COUNT(*)
31000
Executed in 0.047 seconds
152250 < mattk >
152251 < mattk > drop table mat3;
Table dropped
Executed in 0.016 seconds
152252 < mattk > create table mat3 as
2 with skus as (
3 select level as sku_id
4 from dual
5 connect by level <= 1000
6 ),
7 tran_dates as (
8 select to_date('20130731', 'yyyymmdd') + level as tran_date
9 from dual
10 connect by level <= 31
11 ),
12 sku_dates as (
13 select s.sku_id,
14 t.tran_date,
15 case when dbms_random.value(1,100) * 5 < 400
16 then 0
17 else 1
18 end as has_changes,
19 round(dbms_random.value * 10000, 2) as unit_cost
20 from skus s
21 inner join tran_dates t
22 on 1 = 1
23 ),
24 tmp as (
25 select d.sku_id,
26 d.tran_date,
27 d.unit_cost
28 from sku_dates d
29 where d.has_changes = 1
30 )
31 select m.sku_id,
32 m.tran_date,
33 m.unit_cost,
34 min(m.tran_date) over (partition by m.sku_id order by m.tran_date rows between 1 following and 1 following) as next_tran_date
35 from tmp m
36 ;
Table created
Executed in 0.078 seconds
152252 < mattk > select count(*) from mat2;
COUNT(*)
0
Executed in 0.031 seconds
152252 < mattk > select count(*) from mat3;
COUNT(*)
0
Executed in 0.047 seconds -
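The thread never resolves the cause, but one hazard it illustrates is general: any query whose filter depends on a random function is re-evaluated on every execution, so two runs of the same CREATE TABLE AS can legitimately produce different row counts. A language-neutral Python sketch of the effect (seeding plays the role of materializing the intermediate result first; all names here are invented):

```python
import random

def build(seed=None):
    """Simulate the CTAS: keep rows whose random has_changes flag survives the filter."""
    rng = random.Random(seed)
    rows = [(sku, rng.random()) for sku in range(1000)]
    return [r for r in rows if r[1] >= 0.8]  # roughly the has_changes = 1 branch

# Without a fixed seed, each call to build() can return a different row set,
# like two runs of the CTAS above. With a fixed seed (the analogue of
# materializing the intermediate result into a table), runs agree:
assert build(seed=42) == build(seed=42)
```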
Hi,
In Microsoft Test Manager I am trying to add a link in the Expected Results tab, so the person running the test case can click on the link without closing anything down. I am able to add the link in the expected results area, but no hyperlink comes
up, so the tester would have to copy and paste it. Is there a way around this?
PS. Sorry if this is in the wrong category, I did not see Test Manager.
Thanks,
Steve
Hi Steve,
This forum is for software developers who are using the Open Specification documentation to assist them in developing systems, services, and applications that are interoperable with Microsoft products. The Open Specifications can be found at:
http://msdn.microsoft.com/en-us/library/cc203350(PROT.10).aspx. Since your post does not appear to be related to the Open Specification documentation set, we would appreciate it if
you could try to post your question in a more relevant forum. Thank you.
Testing with Visual Studio Test Manager (MTM)
https://social.msdn.microsoft.com/Forums/en-US/home?forum=vsmantest&filter=alltypes&sort=lastpostdesc
Josh Curry (jcurry) | Escalation Engineer | Open Specifications Support Team -
I am trying to come up with a plan to write a LabVIEW VI to do the following test. Can you give me a few ideas on how to do this in LabVIEW? I am new to LabVIEW. I think I know how to read and write I/O ports and do comparisons; I need a little guidance on the error checking. In simple terms, the test will go like this:
I have 16 digital inputs and 16 outputs.
The 16 outputs are turned on in a specific pattern (i.e. 1001000101011101) and then the 16 inputs (i.e. 1000101111111111) are read in after a time delay. The inputs are checked to see if they match the expected results. If they do, it's a pass; if not, it's a failure. This seems pretty straightforward, and I think I have an idea how to do it. Here's the problem: the inputs are changed sequentially so that all possible combinations are tried. The test needs to determine if the resulting input pattern is correct based on the outputs that were sent out. 16 outputs give 65K possible tests. For each test there would be 1 or 2 possible results, with a total of 25 results for the entire 65K possible tests. LabVIEW would have to determine the expected result or results (1 or 2 of the 25) based on the output pattern sent out (1 of 65K). Then it would have to compare the actual input pattern received to see if it's a pass or fail.
Any ideas how I can approach this?
The 16 outputs are simulating inputs to the device under test (simulating remote switches and contacts). The object of the test is to test every possible combination to ensure that nothing unexpected happens at the output. The device under test is a logic motor control system and we want to make sure (among other things) that we don't start or stop the motor when it's not supposed to. How can only two tests do that?
I think you are describing how to create an array with the results. But I still don’t know how to determine what the result should be and if it is correct.
I’ve identified 25 possible valid states the motor controller can be in.
I’ve also identified the correct outputs that determine each of the 25 states.
I’ve also
identified the possible valid states you can go to from each (previous) state, You can only get to a valid new state from a previous state if the right combination of inputs is applied (we hope).
If you know what state you are in when start and you know the valid states you can go to and the inputs required to get there, you should be able test the system against that. You verify this by checking the outputs against what they should be. With 65K possible inputs combination, checking them all manually would be quite is a task. Putting this into LabView is my task. -
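The lookup-table approach described above can be sketched in Python (the patterns and the table contents here are hypothetical, not from the actual test plan): build a map from each output pattern to the set of input patterns that are valid responses, then check the measured pattern against it.

```python
# Hypothetical expected-results table: 16-bit output pattern -> set of
# input patterns that count as a pass (1 or 2 allowed results per test).
expected = {
    0b1001000101011101: {0b1000101111111111, 0b1000101111111110},
    0b0000000000000001: {0b0000000000000011},
}

def check(output_pattern, measured_input):
    """Pass when the measured input is among the expected results for this output."""
    return measured_input in expected.get(output_pattern, set())

print(check(0b1001000101011101, 0b1000101111111111))  # True -> pass
```

The same idea scales to all 65K output patterns: the table is built once from the 25 valid states, and each test is a single set-membership check.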
Ldapsearch does not return the correct result
We are using the Netscape Directory server 4.12. We try to use the Radiator Radius server to query the Directory server for authentication.
When we use the filter:
uid=itkychan
to query if the user is in the Directory, it returns the correct result.
However, we would like to query if the username is in a specified group, e.g. tss_ns_grp, in the Directory, we try to use the following command:
#./ldapsearch -b "o=The xxx University,c=HK" -h hkpu19
"(&(cn=tss_ns_grp)(|(&(objectclass=groupofuniquenames)(uniquemember=uid=itkychan))(&(objectclass=groupofnames)(member=uid=itkychan))))"
but no result is returned.
itkychan is a member of the group tss_ns_grp. The following is a portion of the LDIF file:
dn: cn=tss_ns_grp, o=The xxx University, c=HK
objectclass: top
objectclass: groupOfUniqueNames
objectclass: mailGroup
cn: tss_ns_grp
mail: [email protected]
mailhost: abcd.xxxu.edu.hk
uniquemember: cn=xxxxxxxxxxxxxx, uid=itkychan, ou=xxxu_ITS, o=The xxx University, c=HK
uniquemember: cn=xxxxxxxxxxxxxx, uid=itjcheng, ou=polyu_ITS, o=The xxx University, c=HK
uniquemember: .....
You will have to use the full DN for the member and uniquemember, e.g.
(...(uniquemember=cn=xxxxxxxxxxxxxx, uid=itkychan, ou=xxxu_ITS, o=The xxx University, c=HK)...) -
Safari is slowly getting more buggy. Actions such as clicking on a field don't give the expected results. I may have to click on the red exit button 5 times to get it to work. I get unwanted dropdown menus. I have version 5.1.7 on OS X 10.6.8. The system is 3 years old.
Are you running low on RAM..?
see > Using Activity Monitor
Is your Hard Drive getting full...?
see > Freeing space on your Mac OS X startup disk
Have you tried Repair Disk Permissions after upgrading Safari...?
see > About Disk Utility's Repair Disk Permissions feature
Reset Safari...? -
Reorder point calculation doesn't give the expected result
Dear friends,
We are using CBP to set reorder point (MRP type is VM).
We've set the planned delivery time, on MRP2 view, as 120 days.
When we run the forecast, the calculated basic value gives the expected
result, but the reorder point doesn't.
As we've set the planned delivery time to 120 days, we expect the
reorder point to be "basic value"*4 (4 = 120/30). But the reorder
point returned is greater (approx. "basic value"*4.5).
Can you please give me some help on this matter?
Many thanks,
Afonso Pereira
You may get your expected result if you use VB instead of VM.
Manual Reorder Point Planning
In manual reorder point planning, you define both the reorder level and the safety stock level manually in the appropriate material master.
Automatic Reorder Point Planning
In automatic reorder point planning, both the reorder level and the safety stock level are determined by the integrated forecasting program.
The system uses past consumption data (historical data) to forecast future requirements. The system then uses these forecast values to calculate the reorder level and the safety stock level, taking the service level, which is specified by the MRP controller, and the material's replenishment lead time into account, and transfers them to the material master.
Since the forecast is carried out at regular intervals, the reorder level and the safety stock level are continually adapted to the current consumption and delivery situation. This means that a contribution is made towards keeping stock levels low. -
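The reply above can be made concrete with a small arithmetic sketch. The formula is the standard automatic reorder point calculation (reorder level = forecast demand over the replenishment lead time plus safety stock); the numbers are invented. It shows why the result comes out above "basic value" * 4 once safety stock is included:

```python
# Invented figures for illustration only.
basic_value = 1000           # forecast demand per month (the "basic value")
lead_time_months = 120 / 30  # planned delivery time of 120 days
safety_stock = 500           # derived by the system from service level and forecast error

# reorder level = demand over the replenishment lead time + safety stock
reorder_point = basic_value * lead_time_months + safety_stock
print(reorder_point)  # 4500.0, i.e. roughly basic_value * 4.5 rather than * 4
```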
Simple Query with subquery returns the 'wrong' result
DB version: 11.2
We created about 27 schemas in the last 4 days. The below query confirms that.
SQL > select username, created from dba_users where created > sysdate-4;
USERNAME CREATED
MANHSMPTOM_DEV_01 12 Jul 2012 11:55:16
PRSM01_OAT_IAU 13 Jul 2012 01:51:03
F_SW 11 Jul 2012 17:52:42
FUN_CDD_HK_SIT 09 Jul 2012 15:33:57
CEMSCOMPTOM_UAT_01 12 Jul 2012 11:43:45
STORM02_OAT_IAU 13 Jul 2012 02:06:29
27 rows selected. -------------> Truncated output
From the DBA_TS_QUOTAS.max_bytes column, we can determine the space quota allocated for a user/schema.
SQL > desc dba_ts_quotas
Name Null? Type
TABLESPACE_NAME NOT NULL VARCHAR2(30)
USERNAME NOT NULL VARCHAR2(30)
BYTES NUMBER
MAX_BYTES NUMBER
BLOCKS NUMBER
MAX_BLOCKS NUMBER
DROPPED VARCHAR2(3)
So, I wanted to see the allocated space for the users created in the last 4 days. The below query should return only 27 records because the subquery returns only 27 records. Instead it returned 66 records!
select username, tablespace_name, max_bytes/1024/1024 quotaInMB
from dba_ts_quotas
where username in (select username from dba_users where created > sysdate-4);
Any idea why? I know it is not a bug with Oracle. It's just that I haven't been eating fish lately.
Hi,
J.Kiechle wrote:
So, I wanted to see the allocated space for the users created in the last 4 days.
DBA_TS_QUOTAS doesn't give the allocated space but rather the maximum allowed for a given user.
J.Kiechle wrote:
The below query should return only 27 records because the subquery returns only 27 records. Instead it returned 66 records!
What if your user John has quotas on 3 tablespaces TBS1, TBS2 and TBS3?
You cannot expect the outer query to retrieve at most 27 rows just because the inner query returns 27 rows.
For space allocated by tablespace for recently created users, you'd better query dba_segments, something like:
select owner, tablespace_name, trunc(sum(bytes)/1024/1024) alloc_mb
from dba_segments
where owner in (select username from dba_users where created > sysdate - 4)
group by owner, tablespace_name
order by owner, tablespace_name; -
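The point in the reply (one user, several tablespaces, hence several quota rows) can be sketched in a few lines of Python with invented data: filtering by membership in a 2-element set can still yield more than 2 rows when the filtered table has multiple rows per key.

```python
# 2 recently created users (the subquery result) ...
recent_users = {"JOHN", "MARY"}
# ... but quota rows exist per (user, tablespace) pair:
quota_rows = [
    ("JOHN", "TBS1"), ("JOHN", "TBS2"), ("JOHN", "TBS3"),
    ("MARY", "TBS1"),
    ("BOB", "TBS1"),
]

matched = [row for row in quota_rows if row[0] in recent_users]
print(len(matched))  # 4 rows from a 2-user subquery, just like 66 rows from 27 users
```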
2 queries should return the same result (but they dont...)
hello
i have a following query:
select col1,extract(year from datum) yr, COUNT(*)
from tableA@dblink where
DATUM between '1-jan-1985' and '31-dec-2012'
and col2 > 100 and col2 not in ('999999')
and TRIM(TO_CHAR(col1)) in ('0','1')
group by col1,extract(year from DATUM);
The above query returns the count 143 982 for year 1991.
however when i put the filter directly into this year the query returns a different number: 143 917
select col1,extract(year from datum) yr, COUNT(*)
from tableA@dblink where
DATUM between '1-jan-1991' and '31-dec-1991'
and col2 > 100 and col2 not in ('999999')
and TRIM(TO_CHAR(col1)) in ('0','1')
group by col1,extract(year from DATUM);please can you help me understand why is this happening and why these 2 counts are different?
id appreciate any tips
thanks very much
rgds
UserMB wrote:
please can you help me understand why is this happening and why these 2 counts are different?
I'd appreciate any tips
This is tricky. All the others are of course right: you must compare date columns with date values.
If you don't explicitly compare as a date value, it might happen that a string comparison is made.
And if that happens, both of your counts would return a wrong result. For example, all dates that start with
'4' are not counted, like '4-jan-1991': the string '4-jan-1991' is greater than the string '31-dec-1991', therefore it would not be included in the count.
I don't think this happened, but the danger is there.
However, I have a problem seeing what the difference between your two counts can be.
The two important parts are the date between filter and the group by condition.
DATUM between '1-jan-1985' and '31-dec-2012'
DATUM between '1-jan-1991' and '31-dec-1991'
group by col1,extract(year from DATUM);
The second option returns fewer results than the first option. Some rows must be missing because of this different filter condition.
If this would be a text comparison then there shouldn't be a difference. Only if it is a date comparison then this could be explained.
Here is an example:
Dates like 31-dec-1991 17:53:14 are included in the first count/group, but not included in the second count.
Why? Because the string '31-dec-1991' is converted into a date. This date is 31-dec-1991 00:00:00 (midnight). Everything from this day that is not exactly midnight is greater than this value and therefore not included in your second query.
So, as others have already pointed out, you need to either truncate the date or compare it a little differently.
select col1,extract(year from datum) yr, COUNT(*)
from tableA@dblink
where DATUM >= to_date('01-01-1991','dd-mm-yyyy')
and DATUM < to_date('01-01-1992','dd-mm-yyyy')
and col2 > 100 and col2 not in ('999999')
and TRIM(TO_CHAR(col1)) in ('0','1')
group by col1,extract(year from DATUM);
This should give the same result as your first query.
Note that I changed the month from a month name to a number. This makes it independent of national language settings, e.g. DEZ = German, DEC = American.
Edited by: Sven W. on Oct 9, 2012 2:04 PM -
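Both pitfalls described in the reply can be demonstrated outside the database. A small Python sketch (the timestamp is the one from the example above):

```python
from datetime import datetime

# As strings, '4-jan-1991' sorts after '31-dec-1991', so a string BETWEEN
# would wrongly exclude it -- the danger described above.
assert "4-jan-1991" > "31-dec-1991"

# A half-open date range keeps timestamps late in the day that
# BETWEEN ... AND midnight of 31-dec would drop:
lo = datetime(1991, 1, 1)
hi = datetime(1992, 1, 1)   # exclusive upper bound, like DATUM < to_date('01-01-1992', ...)
ts = datetime(1991, 12, 31, 17, 53, 14)
assert lo <= ts < hi        # included in the half-open range
assert ts > datetime(1991, 12, 31)  # but greater than midnight, so BETWEEN misses it
```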
HttpServletRequest.getHeaders("accept") does not return the correct results
I noticed that when I use the invoke the HttpServletRequest.getHeaders("accept") method I do not get the result that I am expecting.
For instance, if the browser passes over the following accept header:
"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
When I invoke the getHeaders() method, instead of returning a single value with the string that was passed over, it returns 4 separate entries. It looks like iPlanet is parsing the header value and breaking it up by commas.
When I run the same code in TomCat, it returns the single comma separated string as I would expect.
Is there anything to keep Iplanet from breaking apart the header value?
I am running Oracle iPlanet Web Server 7.0.15 B04/19/2012 21:52
Thanks In Advance,
Marty
Edited by: MartyJones2 on Aug 30, 2012 2:44 PM -
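If the server cannot be configured to stop splitting the field, a portable workaround (my suggestion, not an iPlanet setting) is to re-join the values the server hands back: HTTP permits combining repeated field values into one comma-separated string. A Python sketch of the reassembly:

```python
# Values as a server might return them after splitting the Accept header at commas.
values = ["text/html", "application/xhtml+xml", "application/xml;q=0.9", "*/*;q=0.8"]

# Rejoin with ", " to recover a single comma-separated header string.
accept = ", ".join(values)
print(accept)
```

In a servlet, the same join would be applied to the `Enumeration` returned by `getHeaders("accept")` before any code that expects the single-string form.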
SsIncludeXml does not return the expected output
Hi
In the context of SiteStudio 10gR4:
I want to emulate a simple key-value-map using a static list. The purpose is enable authors to reference frequently-changed text-variables and maintain them in a single file.
When I attempt to use the ssIncludeXml function to place the value on a page, I get no output. I have checked (using the XML features in Oracle JDeveloper 11g) that the XPath actually evaluates to the expected value (FLAF_VAL_1). When I call the ssGetXmlNodeCount function with the same arguments, I get the result "0".
What am I doing wrong? Are there any limitations on the kinds of XPath expressions that ssIncludeXml will evaluate?
Thanks!
I use this snippet to include the value:
[!--$ ssIncludeXml("KEYVAL_TEST", "//wcm:element[@name='Key'][text()='FLAF_KEY_1']/../wcm:element[@name='Value']/node()") --]

And this is my test file in the document KEYVAL_TEST:
<?xml version="1.0" encoding="UTF-8"?>
<wcm:root xmlns:wcm="http://www.stellent.com/wcm-data/ns/8.0.0" version="8.0.0.0">
<wcm:list name="KeyValuePairs">
<wcm:row>
<wcm:element name="Key">FLAF_KEY_1</wcm:element>
<wcm:element name="Value">FLAF_VAL_1</wcm:element>
</wcm:row>
<wcm:row>
<wcm:element name="Key">FLAF_KEY_2</wcm:element>
<wcm:element name="Value">FLAF_VAL_2</wcm:element>
</wcm:row>
</wcm:list>
</wcm:root>

Thank you for your reply!! It is great getting some helpful input to help me move forward on this issue!
2. What you are doing in the example is a bit simpler and not really sufficient to express the query I need to perform.
I need to select the contents of a Value element based on the value of the corresponding Key element in the same row.
This can easily be formulated using XPath. I have already posted a few different XPath expressions that are equivalent for this XML file. Each of the expressions works perfectly in JDeveloper when querying the XML file, and yet fails in Site Studio.
The [Technical Reference|http://download.oracle.com/docs/cd/E10316_01/ouc.htm] states:
This script extension is a core Site Studio method that allows any element within a managed XML file to be extracted and returned in an Idoc string variable, which can be placed directly on a web page as an HTML snippet. The content of the XML node being extracted will be further evaluated in the scope of the current layout and can therefore include further server-side Idoc Script, if necessary.
The parameters are:
- dDocName of XML data file
- XPath expression (to identify a specific node or nodes of the XML data file)
Do you know what implementation of the XPath language is used in Site Studio?
Is it a subset of the language (the Technical Reference does not mention this at all)? Which subset?
Here is another XPath expression that works perfectly well in JDeveloper and fails in Site Studio:

wcm:root/wcm:list/wcm:row[child::wcm:element[@name='Key']/text()[.='FLAF_KEY_1']]/wcm:element[@name='Value']/node()

Any insights are appreciated! :-) -
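One way to double-check that the expression is plain XPath 1.0 (and that the failure is on the Site Studio side) is to evaluate it with the JDK's built-in javax.xml.xpath engine, which also requires namespace-aware parsing for the wcm: prefix to resolve. A self-contained sketch, with the data file trimmed to one row and the class name chosen for illustration:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathCheck {

    // One-row version of the KEYVAL_TEST data file from the post.
    static final String XML =
          "<wcm:root xmlns:wcm='http://www.stellent.com/wcm-data/ns/8.0.0'>"
        + "<wcm:list name='KeyValuePairs'><wcm:row>"
        + "<wcm:element name='Key'>FLAF_KEY_1</wcm:element>"
        + "<wcm:element name='Value'>FLAF_VAL_1</wcm:element>"
        + "</wcm:row></wcm:list></wcm:root>";

    static String lookup(String key) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // required, or wcm: elements won't match
        Document doc = dbf.newDocumentBuilder().parse(
                new ByteArrayInputStream(XML.getBytes(StandardCharsets.UTF_8)));

        XPath xp = XPathFactory.newInstance().newXPath();
        // Bind the wcm prefix to the namespace declared in the data file.
        xp.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "wcm".equals(prefix)
                        ? "http://www.stellent.com/wcm-data/ns/8.0.0"
                        : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });

        // The same key-to-value expression used in the post.
        return xp.evaluate("//wcm:element[@name='Key'][text()='" + key
                + "']/../wcm:element[@name='Value']/node()", doc);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(lookup("FLAF_KEY_1")); // FLAF_VAL_1
    }
}
```

If this prints the expected value while Site Studio returns nothing, that points to Site Studio's XPath engine supporting only a subset of the language rather than to a mistake in the expression.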
Initial JNDI context doesn't return the root node
Hi :
I just deployed a Java application on NetWeaver CE 7.2. When the application starts up, the code tries to perform JNDI bind and lookup operations by calling Context context = new InitialContext(); context.bind(...); context.lookup(...). The problem is that the initial context returned is not the root context (i.e. "/"); it is something like "webcontainer/applications/vendor-name/application-name/module-name/....". This is a problem for us because our application consists of multiple EARs with multiple modules (WARs) inside each EAR. Each module needs to publish a set of services as JNDI entries, and those services (JNDI entries) need to be accessible to other EARs or modules, which means they have to be published under some public directory (for example the root directory); they cannot be published under the current context of the current module.
So my question is: is there a way to make new InitialContext() return the root directory, perhaps by changing the deployment descriptor or configuration settings somewhere? Thanks a lot for your help.

Hi,
Add this code to the <i>wdDoModifyView()</i> method:
IWDRoadMap rdMap = (IWDRoadMap) view.getElement("<id of the RoadMap UI>");
rdMap.mappingOfOnSelect().addSourceMapping("step","selectedStep");
Edit the StepSelected action to add a parameter called "<i>selectedStep</i>" of type String. After this the signature for the action handler will look like:
public void onActionStepSelected(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent, String selectedStep )
Inside the action handler you can access the selected step simply as "<i>selectedStep</i>".
Regards,
Satyajit.