0AMOUNT not accepting more than 10 million
I am using 0AMOUNT in a DSO and loading it from a flat file.
In the PSA the field is in character format; in the DataSource the field is defined with length 9, 2 decimals, external length 12, and internal format.
I am getting the error below while loading the DSO:
Error 'Overflow converting from '10894199.69' ' when assigning application structure, line 502 , contents "1000 CAH01050105090025 4008212200020 XIF025 ... "
Any quick help will be rewarded and appreciated.
Thanks
Shan
Edited by: Shantanu Mukherjee on Apr 6, 2011 2:52 PM
I have resolved it myself.
Had an issue with the datasource.
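For what it's worth, the numbers in the error are consistent with the field definition: a length-9 field with 2 decimals reserves 2 of its 9 digits for decimals, leaving only 7 integer digits, so the largest value it can hold is 9,999,999.99 and anything at or above 10 million overflows. A quick sketch:

```python
# Why '10894199.69' overflows a field defined as length 9, decimals 2:
# 9 total digits minus 2 decimal digits leaves 7 integer digits,
# so the largest representable value is 9,999,999.99.
def max_value(length, decimals):
    return float("9" * (length - decimals) + "." + "9" * decimals)

limit = max_value(9, 2)
print(limit)                  # 9999999.99
print(10894199.69 > limit)    # True -> overflow
```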
Shan
Similar Messages
-
In query mode, date items do not accept more than 10 characters. Why?
Dear All,
In my form, while querying, my date items do not accept more than 10 characters. Can this be changed?
I want to search for dates greater than a specified date (e.g. >01/01/2007).
But I can only specify >01/01/07; after tabbing out of the item, the input is truncated to >01/01/200.
How can I overcome this issue?
Please help.
Regards,
Balaji
You are absolutely correct, Francois!
Previously I made the mistake of setting both the Maximum Length and the Query Length properties to 20, and that did not work. Now I have restored Maximum Length to its old value of 11 and changed only the Query Length to 20.
Now it is working. I had misunderstood the concept.
Thanks Francois! -
External table is not accepting more than 255 Characters
Hi,
I'm new to external tables. Somehow the external table is not accepting more than 255 characters even though I'm using VARCHAR2(4000 BYTE). Can you please help me?
CREATE TABLE DM_CL_ExterTbl_Project
(
  project_name     VARCHAR2(80 BYTE),
  project_id       VARCHAR2(20 BYTE),
  work_type        VARCHAR2(100 BYTE),
  work_description VARCHAR2(4000 BYTE)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY UTL_FILE_DIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY '#(_@p9#' SKIP 1
    LOGFILE 'pp.log'
    BADFILE 'pp1.bad'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' AND '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
    (
      project_name,
      project_id,
      work_type,
      work_description
    )
  )
  LOCATION (UTL_FILE_DIR:'TOG_Data_Extract.csv')
)
REJECT LIMIT UNLIMITED
NOPARALLEL
NOMONITORING;
Thanks in advance..
~~Manju
I got the answer: in the field list I have to specify the datatype and its size; otherwise it defaults to CHAR(255).
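A sketch of the corrected access-parameters field list, assuming the column names and delimiters from the post above (ORACLE_LOADER defaults every unsized field to CHAR(255), so the long column must be sized explicitly):

```sql
-- Hedged sketch of the fix: give each field an explicit datatype and size
-- so work_description can load up to 4000 characters instead of the
-- CHAR(255) default.
ACCESS PARAMETERS
(
  RECORDS DELIMITED BY '#(_@p9#' SKIP 1
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
  MISSING FIELD VALUES ARE NULL
  (
    project_name      CHAR(80),
    project_id        CHAR(20),
    work_type         CHAR(100),
    work_description  CHAR(4000)
  )
)
```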
work_type CHAR(4000) solved the problem! -
XSL call URL not accepting more than one parameter
Hi,
I'm using PL/SQL to generate XML/HTML output. Everything seemed to be going fine until I started trying to pass more than one parameter in the URL when I call my XSL.
Instead of displaying the HTML as defined in the XSL, it just displays the page as raw XML.
Sample output is as follows:
<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="DP_QUERY_SCHED_PKG3.stylesheet?pthrsh=20&pqsid=2635"?>
- <ARRAY>
- <ROW num="1">
<ACCOUNT_DESC>Accommodation</ACCOUNT_DESC>
<ACTUAL_PERIOD_CREDITS>0</ACTUAL_PERIOD_CREDITS>
<ACTUAL_PERIOD_DEBITS>533307.94</ACTUAL_PERIOD_DEBITS>
<PTD_ACTUAL_NET_ACTIVITY>533307.94</PTD_ACTUAL_NET_ACTIVITY>
</ROW>
- <ROW num="2">
<ACCOUNT_DESC>Accum. Depr. Vehicles</ACCOUNT_DESC>
<ACTUAL_PERIOD_CREDITS>0</ACTUAL_PERIOD_CREDITS>
<ACTUAL_PERIOD_DEBITS>0</ACTUAL_PERIOD_DEBITS>
<PTD_ACTUAL_NET_ACTIVITY>0</PTD_ACTUAL_NET_ACTIVITY>
</ROW>
</ARRAY>
It works fine for me, but I am using DB Prism on iAS. DB Prism is a servlet engine which works like the PL/SQL cartridge of OAS.
Check in this url for a demo online of an xml page generated in plsql with two stylesheet generated in the db too. http://cocodrilo.exa.unicen.edu.ar:7777/servlets/xml/demo.xmlcomplex?producer=db
You can find the source of the demo at: http://cocodrilo.exa.unicen.edu.ar:7777/servlets/plsql/demo.startup
Look for "XML Complex generation" demo.
It has three stored procedures in PL/SQL: demo.complex, demo.news(a,b), and demo.news_text.
Best Regards, Marcelo.
PD: If you need more information about DB Prism and XML capabilities with Apache Cocoon look at: http://www.plenix.com/dbprism/
-
IN clause issue in Oracle -- not accepting more than 1000 expressions
update table_name set col1='Y' where col2 in ('a','b',.......1500 expressions)
Please suggest the best method to replace the above SQL statement.
vasanthi b wrote:
update table_name set col1='Y' where col2 in ('a','b',.......1500 expressions)
Please suggest me the best method to replace the above sql statement..
The best method is the correct method.
Normalisation.
Col2 is not normalised. That is the core problem.
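That said, if the data model cannot be fixed right away, the usual stopgap for the 1000-literal cap (ORA-01795) is to split the values into chunks of at most 1000 and OR the IN lists together. A hedged sketch in Python that only builds the SQL text; the table and column names come from the post, and the generated values are stand-ins for the 1500 literals:

```python
def build_update(table, set_clause, column, values, chunk_size=1000):
    """Build an UPDATE whose IN lists each stay within Oracle's 1000-literal cap."""
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    predicate = " OR ".join(
        "{} IN ({})".format(column, ", ".join("'%s'" % v for v in chunk))
        for chunk in chunks
    )
    return "UPDATE {} SET {} WHERE {}".format(table, set_clause, predicate)

values = ["v%d" % i for i in range(1500)]   # stand-in for the 1500 literals
sql = build_update("table_name", "col1 = 'Y'", "col2", values)
print(sql.count(" IN ("))   # 2 -> one chunk of 1000, one of 500
```

Binding the values through a global temporary table avoids both the literal cap and the hard parse of a huge statement; normalising col2, as the reply insists, remains the real fix.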
And trying to hack that failure in the data model with a 1000+ expression clause... that is just plain bloody silly. -
Shell script: how can getopts accept more than one string in one var?
In a shell script, how can getopts accept more than one word in one variable (DES)?
Here is the part of the shell script that parses the options; the DES variable does not accept more than one word. How do I fix this?
When I run the script like below:
sh Ericsson_4G_nwid_configuration.sh -n orah4g -d "Ericsson 4g Child" -z Europe/Stockholm
it only stores "Ericsson" in the DES variable instead of the full "Ericsson 4g Child".
How can I make it accept the full string "Ericsson 4g Child"?
========================================
# Note: the optstring must use straight quotes (not smart quotes),
# and each case branch must be terminated with ;;
while getopts "hn:r:p:v:d:z:" OPTION
do
  case $OPTION in
    h)
      usage
      exit 1
      ;;
    n)
      TEST=$OPTARG
      ;;
    z)
      ZONE="$OPTARG"
      ;;
    d)
      DES="$OPTARG"
      ;;
    v)
      VERBOSE=1
      usage
      exit
      ;;
  esac
done
Please use code tags when pasting to the boards: https://wiki.archlinux.org/index.php/Fo … s_and_Code
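A minimal, runnable sketch of the corrected parsing, with the option letter and variable name taken from the script above. The multi-word value survives as long as the caller quotes it and the optstring uses straight quotes:

```shell
#!/bin/sh
# Minimal sketch: a quoted multi-word argument arrives in $OPTARG intact.
parse_opts() {
    DES=""
    OPTIND=1
    while getopts "d:" OPTION "$@"; do
        case $OPTION in
            d) DES="$OPTARG" ;;
        esac
    done
}

parse_opts -d "Ericsson 4g Child"
echo "$DES"   # Ericsson 4g Child
```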
Also, see http://mywiki.wooledge.org/Quotes -
MDX result contains too many cells (more than 1 million) (WIS 10901)
Hi,
We have developed a universe on a BI query and built a report on it. While running this BO query in Web Intelligence we get the following error:
A database error occured. The database error text is: Error in MDDataSetBW.GetCellData. MDX result contains too many cells (more than 1 million). (WIS 10901)
This BO query is restricted to one document number.
When I check the BI cube, there are no more than 300-400 records for that document number.
If I restrict the BO query by document number, delivery number, material and acknowledged date, the query runs successfully.
Can anyone please help with this issue?
Follow this article to get the MDX generated by the Webi report:
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90b02218-d909-2e10-1988-a2ca74547900
Then try to execute the same in the MDXTEST transaction in BW.
My HP Officejet 6500 E709a All-in-One will not print more than one copy at a time.
My HP Officejet 6500 E709a All-in-One will not print more than one copy at a time. This is especially disconcerting at Christmas time when you're trying to get Christmas letters in the mail! Any help?
The cyan portion of the printhead may be clogged. I would suggest running the diagnostics shown here. If three levels of printhead cleaning do not resolve the issue, you may need a new printhead.
Bob Headrick, HP Expert
I am not an employee of HP, I am a volunteer posting here on my own time.
If your problem is solved please click the "Accept as Solution" button ------------V
If my answer was helpful please click the "Thumbs Up" to say "Thank You"--V -
Analyse a partitioned table with more than 50 million rows
Hi,
I have a partitioned table with more than 50 million rows. It was last analysed on 1/25/2007. Do I need to analyse it again? (Queries on this table run very slowly.)
If so, what is the best way? Use DBMS_STATS and schedule a job?
Thanks
A partitioned table has global statistics as well as partition (and subpartition, if the table is subpartitioned) statistics. My guess is that you mean the last time global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
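The approach just described -- back up the existing statistics, then gather only global statistics with a small sample -- might be sketched as follows; the schema, table, and stat-table names are placeholders, not taken from the thread:

```sql
-- Hedged sketch: back up current statistics before touching anything,
-- then gather only the table-level (global) statistics with a 1% sample.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_BACKUP');
  DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'APP', tabname => 'BIG_PART_TAB',
                                stattab => 'STATS_BACKUP', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'APP',
                                tabname          => 'BIG_PART_TAB',
                                granularity      => 'GLOBAL',
                                estimate_percent => 1);
END;
/
```

If a plan regresses afterwards, DBMS_STATS.IMPORT_TABLE_STATS can restore the saved statistics from the backup table.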
Justin -
Handling internal tables with more than 1 million records
Hi All,
We are facing a short dump that reports wrongly set storage parameters.
Basically the dump occurs because an internal table holds more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump still happens.
Please advise whether there is any other way to handle this kind of internal table.
P.S.: we have tried using a hashed table, but that does not suit our scenario.
Thanks and Regards,
Vijay
Hi,
Your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
Hope this code helps.
DATA: DB_CURSOR      TYPE CURSOR,
      IT_ZTABLE      TYPE STANDARD TABLE OF ZTABLE,
      G_PACKAGE_SIZE TYPE I VALUE 50000.

* Using a DB cursor to fetch the data in batches.
OPEN CURSOR WITH HOLD DB_CURSOR FOR
  SELECT *
    FROM ZTABLE.
DO.
  FETCH NEXT CURSOR DB_CURSOR
    INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
    PACKAGE SIZE G_PACKAGE_SIZE.
  IF SY-SUBRC NE 0.
    CLOSE CURSOR DB_CURSOR.
    EXIT.
  ENDIF.
* Process the current package of IT_ZTABLE here before fetching the next one.
ENDDO.
How to get data from large table (more than 9 million rows) by EJB?
I have a giant table with more than 9 million rows.
I want to use EJB finder methods to get data from this table, but I always get an out-of-memory error or a timeout error.
Can anyone give me solutions?
Thx
Your problem may be that you are simply trying to load so many objects (found by your finder) that you are exceeding available memory. For example, if each object is 100 bytes and you try to load 1,000,000 objects, that's 100 MB of memory gone.
You could try increasing the amount of memory available to OC4J with the appropriate argument on the command line (or in the 10gAS console). For example to make 1Gb available to OC4J you would add the argument:
-Xmx1000m
Of course you need have this available as hard memory on your server or you will incur serious swapping.
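The reply's arithmetic (the 100-byte object size is its own illustrative figure) checks out:

```python
# Back-of-the-envelope heap estimate from the reply above:
# 1,000,000 objects at roughly 100 bytes each.
objects = 1_000_000
bytes_per_object = 100
print(objects * bytes_per_object // 10**6)  # 100 -> about 100 MB of heap
```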
Chris -
Increase performance of a query on more than 10 million records significantly
The story is:
Every day there are more than 10 million records, with the data in text files (.csv (comma-separated values) extension, or similar).
An example text file name is transaction.csv
Phone_Number
6281381789999
658889999888
618887897
etc .. more than 10 million rows
From transaction.csv the data is then split into 3 RAM (in-memory) tables:
1st. table nation (nation_id, nation_desc)
2nd. table operator(operator_id, operator_desc)
3rd. table area(area_id, area_desc)
Then these 3 RAM tables are queried to produce the physical table EXT_TRANSACTION (on hard disk).
The physical external Oracle table EXT_TRANSACTION has the following result columns:
Phone_Number Nation_Desc Operator_Desc Area_Desc
======================================
6281381789999 INA SMP SBY
So : Textfiles (transaction.csv) --> RAM tables --> Oracle tables (EXT_TRANSACTION)
The first 2 digits are the nation_id, the next 4 digits the operator_id, and the next 2 digits the area_id.
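The digit layout just described can be sketched as a small function; the sample number is taken from the post, and the function name is just an illustration:

```python
def split_msisdn(phone: str):
    """Split a phone number into (nation_id, operator_id, area_id):
    first 2 digits, next 4 digits, next 2 digits."""
    return phone[:2], phone[2:6], phone[6:8]

print(split_msisdn("6281381789999"))  # ('62', '8138', '17')
```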
I once heard that, to increase performance significantly, there is a technique of keeping tables in memory (RAM) rather than on disk.
Any advice would be very much appreciated.
Thanks.
Oracle uses sophisticated algorithms for various memory caches, including buffering data in memory. This is described in Oracle® Database Concepts.
You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
However, this means less of the buffer cache is available to cache other frequently used data. So this approach could make access to one table a bit faster at the expense of making access to other tables slower.
This is a balancing act: how much can one "interfere" with the cache before affecting and downgrading performance? Oracle also recommends that this type of "forced" caching be used for small lookup tables; it is not a good idea on large tables.
As for your problem: why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource in high demand, and a very finite one. It needs to be carefully spent to get the best and optimal performance.
The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best possible.
You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory will be treating the symptom - not the actual problem that tons of data are being processed.
So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be use to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - and instead of multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable, as are today's hardware and software. But that needs a sound software engineering approach, and that approach says we first need to fully understand the problem before we can solve it, treating the cause and not the symptom. -
MDX result contains too many cells (more than 1 million)
Hi experts!!!
I have a Webi report, but when I try to refresh it, the report shows the following message:
The database error text is: Error in MDDataSetBW.GetCellData. MDX result contains too many cells (more than 1 million). (WIS 10901)
I have read that this error occurs because the report returns too much data, but that it can be solved.
I have read the following SAP Notes:
1232751 (for SAP BW release 7)
931479 (for SAP BW 3.5 Support Package 17)
We have SAP BW 3.5 Support Package 22, so these notes do not apply to us.
Do you know an SAP Note that solves this problem and works with our SAP BW version?
I will wait for your answer.
Ruddy Alvarado
Hi!!
These are my MDX queries:
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGRXW55RLOXZEFNWEPC11Z], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ], [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LGOGS5KO4DB7KIULHYQZDZRR], [Measures].[4LGOGSD972Z0Q72ARC139FYHJ], [Measures].[4LGOGSKXQ1KQ8TLQX63FJHX7B], [Measures].[4LGOGSSM906FRG57305RTJVX3], [Measures].[4LGOGT0ARYS5A2ON8U843LUMV], [Measures].[4LGOGT7ZAXDUSP83EOAGDNTCN], [Measures].[4LGOGWF77CFHK3BTU79KKHA3B], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS, [ZSIOCOPRO__ZSIOSABOR].[LEVEL01].MEMBERS ), [0CALDAY].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS ), { [0SOLD_TO__0REGION].[GT G20] } ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_DIST].[10CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOSABOR].[5ZSIOCOPRO__ZSIOSABOR], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] } ON COLUMNS , NON EMPTY { [0SOLD_TO__0REGION].[GT G20] } DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
[ZSIOCTCOR/ZSIO_BO_CXC]: SELECT { [Measures].[3Z4D1BQHVZ7VR5IJWHLKNPIL8], [Measures].[3Z4D1BY6EXTL9S202BNWXRHB0], [Measures].[3Z4D1BITD0M68IZ3QNJ8DNJVG], [Measures].[3Z4D1BB4U20GPWFNKTGW3LL5O], [Measures].[3Z4D1B3GB3ER79W7EZEJTJMFW], [Measures].[3Z4D1AGEQ7LMNE9UXH7IZDQAK], [Measures].[3Z4D1AVRS4T1ONCR95C7JHNQ4], [Measures].[480ABQOBFURXYE3L9IUXLSHNB] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [ZSIOCLIE__ZSIOTCOB].[LEVEL01].MEMBERS, [ZSIOCLIE__ZSIORUTA].[LEVEL01].MEMBERS ), [ZSIODAFA].[LEVEL01].MEMBERS ), [ZSIOCLIE].[LEVEL01].MEMBERS ), [ZSIOCLIE__ZSIOCET].[LEVEL01].MEMBERS ), [ZSIOAGENC].[LEVEL01].MEMBERS ), { [ZSIOCLIE__ZSIOREGIO].[90120] } ) DIMENSION PROPERTIES [ZSIOAGENC].[5ZSIOAGENC], [ZSIOCLIE].[2ZSIOCOSIM], [ZSIOCLIE].[2ZSIOLICRE], [ZSIOCLIE].[4ZSIOCLIE], [ZSIOCLIE__ZSIOCET].[5ZSIOCLIE__ZSIOCET], [ZSIOCLIE__ZSIOREGIO].[5ZSIOCLIE__ZSIOREGIO], [ZSIOCLIE__ZSIORUTA].[5ZSIOCLIE__ZSIORUTA], [ZSIOCLIE__ZSIOTCOB].[1ZSIOCLIE__ZSIOTCOB], [ZSIODAFA].[2ZSIODAFA] ON ROWS FROM [ZSIOCTCOR/ZSIO_BO_CXC]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS, [0CALDAY].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), { [0SOLD_TO__0REGION].[GT G20] } ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY { [0SOLD_TO__0REGION].[GT G20] } DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ) DIMENSION PROPERTIES [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
[Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] } ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS, { [0SOLD_TO__0REGION].[GT G20] } ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
How to put it on mdxtest?? -
More than 1 million files on multi-terabyte UFS file systems
How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.
Thanks.
You are right on. According to official Sun channels:
Paula Van Wie wrote:
Hi Ron,
This is what I've found out.
No, there is no way around the limitation. I would suggest an alternate
file system if possible; I suggest ZFS, as they would get the most space
available since inodes are no longer used.
As the customer noted, if the inode counts were increased significantly
and an fsck were required, there is the possibility that the fsck could
take days or weeks to complete. So, in order to avoid angry customers
having to wait a day or two for fsck to finish, the limit was imposed.
And so far I've heard that there should not be corruption using ZFS and
RAID.
Paula -
Dilemma: does my server accept more than one client?
I created a simple server and a client app, but I am unable to resolve whether the server is going to accept more than a single client. If not, should I implement threads to accept more than one client on my server?
"I created a simple server and a client app." Congrats!
"But I am unable to resolve whether the server is going to accept more than a single client." Not sure what you mean here... Do you mean "Should I allow it to accept more than one client at a time?" If so, then that's up to you, isn't it?
"If not, should I implement threads to accept more than one client on my server?" If so, you mean. Yes, if you want multiple clients to connect: have the server socket accept the socket connection from the client and pass that socket to a new thread which handles the connection; the server socket is then free to accept another connection.
I'm only familiar with the old I/O package, not the New I/O stuff, so this is a bit old school:
ServerSocket ss = new ServerSocket(1234);
while (true) {
    Socket s = ss.accept();
    newClient(s);
}

private void newClient(final Socket s) {
    Thread t = new Thread() {
        public void run() {
            try {
                BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                PrintWriter out = new PrintWriter(new OutputStreamWriter(s.getOutputStream()));
                out.println("yes? what is it?");
                out.flush();
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("ha ha, you said '" + line + "'");
                    out.flush();
                }
            } catch (Exception e) {
                // client went away; fall through to cleanup
            } finally {
                try {
                    s.close();
                } catch (Exception e) {
                    // ignore close failure
                }
            }
        }
    };
    t.start();
}