Perform Joins on the InfoCubes

Hi,
As we know, there is a new feature to perform joins on InfoCubes by using InfoSets.
So, to report on the join of two InfoCubes, is the only process we have:
1. Include InfoCube1 and InfoCube2 in the InfoSet
2. Include the InfoSet in the MultiProvider and generate the reports.
Is my understanding correct?
If I want to generate reports on a MultiProvider based on two criteria, i.e.:
1. Join of two cubes
2. Union of two cubes
How do I accomplish that task?
Please advise; I will assign the points.
Thanks,

Simon is correct.
A union operation is used to combine the data from objects in a MultiProvider. Here, the system constructs the union set of the data sets involved; all the values of these data sets are combined.
As a comparison: InfoSets are created using joins. These joins only combine values that appear in both tables. In contrast to a union, joins form the intersection of the tables.
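The difference can be illustrated with a small SQL sketch (plain SQLite via Python here; the two tables are made-up stand-ins for the two InfoCubes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE cube1 (material TEXT, amount INTEGER);
CREATE TABLE cube2 (material TEXT, qty INTEGER);
INSERT INTO cube1 VALUES ('A', 10), ('B', 20);
INSERT INTO cube2 VALUES ('B', 2), ('C', 3);
""")

# Join (InfoSet behaviour): only materials that appear in BOTH cubes survive.
joined = conn.execute("""
SELECT c1.material, c1.amount, c2.qty
FROM cube1 c1 JOIN cube2 c2 ON c1.material = c2.material
""").fetchall()
print(joined)  # [('B', 20, 2)]

# Union (MultiProvider behaviour): all records from both cubes are combined.
union = conn.execute("""
SELECT material, amount, NULL FROM cube1
UNION ALL
SELECT material, NULL, qty FROM cube2
""").fetchall()
print(len(union))  # 4
```

So a MultiProvider over the InfoSet (join) and over the two cubes directly (union) gives you both behaviours, as described in the answer above.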
<b>Multiprovider -</b>
http://help.sap.com/saphelp_nw2004s/helpdata/en/52/1ddc37a3f57a07e10000009b38f889/content.htm
<b>Infosets -</b>
http://help.sap.com/saphelp_nw2004s/helpdata/en/ed/084e3ce0f9fe3fe10000000a114084/content.htm
<b>Technical Modelling Aspects -</b>
<b>Infosets -</b>
You can include any DataStore object, InfoCube or InfoObject of type Characteristic with Master Data in a join. A join can contain objects of the same object type, or objects of different object types. You can include individual objects in a join as many times as you want. Join conditions connect the objects in a join to one another (equal join-condition). A join condition determines the combination of individual object records that are included in the results set.
<b>Multiproviders -</b>
Technically there are no restrictions with regard to the number of InfoProviders that can be included in a MultiProvider. However, we recommend that you include no more than 10 InfoProviders in a single MultiProvider, otherwise splitting the MultiProvider queries and reconstructing the results for the individual InfoProviders takes a substantial amount of time and is generally counterproductive. Modeling MultiProviders with more than 10 InfoProviders is also highly complex.
Hope it Helps
Chetan
@CP..

Similar Messages

  • Regarding Performance concerns during the creation of Infocube

    Hi,
    I am going to create an InfoCube on top of an ODS.
    Please tell me some design tips for performance during the creation of the InfoCube, like partitioning, indexes...
    Basically I am loading from Oracle database tables by using DB Connect.
    I will assign the points.
    Bye,
    Rizwan

    hi Rizwan,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    assign points if useful ***
    Thanks,
    Raj

  • Loading performance of the infocube & ODS ?

    Hi Experts,
    Do we need to turn off the aggregates on the InfoCubes before loading so that it decreases the loading time, or does it not matter at all? I mean, if we have aggregates created on the InfoCube, is that going to affect the loading of the cube in any way? Also, please let me know a few tips to increase the loading performance of a cube/ODS. Some of them are:
    1. Delete indexes before loading and recreate them after loading.
    2. Run parallel processes.
    3. Compression of the InfoCube - how does compressing an InfoCube decrease the loading time?
    Please throw some light on the loading performance of the cube/ods.
    Thanks,

    Hi Daniel,
    Aggregates will not affect the data loading. Aggregates are just aggregated views of the InfoCube data.
    As you mentioned, some performance tuning options while loading data:
    Compression is somewhat like archiving the InfoCube data: once compressed, data cannot be decompressed, so you need to ensure the data is correct before compressing. When you compress the data, you will have some free space available, which can improve data loading performance.
    Other than the above options:
    1.If you have routines written at the transformation level, just check whether it is tuned properly.
    2.PSA partition size: In transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
    The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.
    3. Export Datasource:The Export DataSource (or Data Mart interface) enables the data population of InfoCubes and ODS Objects out of other InfoCubes.
    The read operations of the export DataSource are single-threaded (i.e. sequential). Note that during the read operations (dependent on the complexity of the source InfoCube) the initial time before data is retrieved (i.e. parsing, reading, sorting) can be significant.
    The posting to a subsequent DataTarget can be parallelized by ROIDOCPRMS settings for the "myself" system. But note that several DataTargets cannot be populated in parallel; there is only parallelism within one DataTarget.
    Hope it helps!!!
    Thanks,
    Lavanya.

  • Physical query generation: unneeded dimension tables get joined to the fact

    Hi there!
    The setup is the following:
    There is a logical fact table which is joined to 7 logical dimensions, it has 4 table sources which correspond to different time dimension levels (all other dimensions are mapped to Detail level).
    Time dimension logical table also has 4 different table sources (for days, months, quarters, and years).
    The data source is an Oracle Database 11gR2.
    The problem is:
    No matter what the logical query is, in the physical query all 7 joins are performed, even if the resulting data is then simply discarded. This results in very bad query performance.
    I feel that it is somehow related to the level-based fragmentation (since, for instance, inclusion of time dimension columns in SELECT list (not in WHERE) seems to affect physical queries), but lack sufficient knowledge to solve this problem or put up with it.
    My questions are the following:
    1) Have you ever encountered such a situation?
    2) Can unneeded joins be eliminated?
    2.1) If yes, how?
    2.2) If not, then why are they needed?
    Thanks in advance!

    Physical level:
    D01-D06 - ordinary physical tables.
    D_DATES - all time levels from dates to years, D_MONTHS - time levels from months to years, also D_QUARTERS and D_YEARS.
    F_DAILY - fact table joined to all of the D01-D06 and to D_DATES, F_MONTHLY - joined to D01-D06 and D_MONTHS, also F_QUARTERLY and F_YEARLY. All measure columns are the same.
    Logical level:
    D01-D06 correspond to ordinary logical tables with a single table source. Logical dimensions are created.
    D_TIME is a logical time dimension with four levels (dates, months, quarters, and years) and four table sources ( D_DATES, D_MONTHS, D_QUARTERS, and D_YEARS ).
    F is a fact table with four logical table sources ( F_DAILY, F_MONTHLY, F_QUARTERLY, and F_YEARLY ) with aggregation content levels set correspondingly.
    OBIEE correctly picks physical table sources for different time levels, but generates extremely inefficient SQL (joining all dimension sources in a WITH-subquery, doing ROW_NUMBER over a result set, and then discarding half the columns, which were not needed to start with).

  • No join in the query involving 2 tables

    Friends,
    I saw a strange plan for one query in the TESTING DB today. Although 2 tables are involved, I don't see any join (NL/HJ/SMJ)!
    Can you please tell me why this might be happening?
    Note: I am not facing any performance issue, but I am curious to know what type of optimization Oracle is doing here.
    It would be great if you could direct me to the relevant link in the documentation.
    Query text and plan :
    SELECT FIRST_NAME
      FROM CAMPA.TABLE_A
    WHERE NAME_ID =
              (SELECT NAME_ID
                 FROM CAMPA.TABLE_B
                WHERE ban=:b1);
    Plan hash value: 311916800
    | Id  | Operation                             | Name                  | Rows  | Bytes | Cost  | Pstart| Pstop |
    |   0 | SELECT STATEMENT                      |                       |     1 |    12 |     2 |       |       |
    |   1 |  PARTITION RANGE SINGLE               |                       |     1 |    12 |     1 |   KEY |   KEY |
    |   2 |   TABLE ACCESS BY LOCAL INDEX ROWID   | TABLE_A               |     1 |    12 |     1 |   KEY |   KEY |
    |*  3 |    INDEX UNIQUE SCAN                  | TABLE_A_PK            |     1 |       |     1 |   KEY |   KEY |
    |   4 |     PARTITION RANGE SINGLE            |                       |     2 |    30 |     1 |   KEY |   KEY |
    |   5 |      TABLE ACCESS BY LOCAL INDEX ROWID| TABLE_B               |     2 |    30 |     1 |   KEY |   KEY |
    |*  6 |       INDEX RANGE SCAN                | TABLE_B_2IX           |     2 |       |     1 |   KEY |   KEY |
    Predicate Information (identified by operation id):
       3 - access("NAME_ID"= (SELECT "NAME_ID" FROM "CAMPA"."TABLE_B" "TABLE_B"
                  WHERE "BAN"=TO_NUMBER(:B1)))
       6 - access("BAN"=TO_NUMBER(:B1))
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production

    With an equality condition, can I get an NL/HJ join where the outer row source comes from the subquery?
    What condition/prerequisite of an NL/HJ join is not met in this case?
    Don't use an equality condition with a subquery - this implicitly makes available to the CBO an optimisation strategy that can only be used if the subquery is guaranteed to return a single row.
    So either use IN rather than =, or use a JOIN, not a subquery.
    It's a bit of a strange question.
    How do I remove this optimisation?
    Take away the features that make that optimisation a possibility.
    Edited by: Dom Brooks on May 13, 2013 2:20 PM

  • Checking the size of the infocube

    Hi,
    How can we check the size of an InfoCube with a large volume of data? From SAP standard practice, what is the optimal amount of data an InfoCube can hold without an impact on query performance?

    Hi
    When estimating the size of an InfoCube one must consider the size of the fact table and dimension tables. However, the size of the fact table is the most important, since in most cases it will be 80-90% of the total storage requirement for the InfoCube.
    The following shows how to calculate the size of an InfoCube, including the dimension tables and the fact tables.
    Size of a dimension table
    To calculate the size of a dimension table:
    • The size of one record of the dimension table can be calculated by summing the number of characteristics in the dimension table at 4 bytes each. Also, add four bytes for the key of the dimension table.
    • Calculate the number of records in the dimension table.
    • Multiply the size of one record by the number of records.
    Assume that the dimension table indexes will take up as much space as the dimension table itself.
    Size of a Fact Table
    To calculate the size of a fact table:
    • Count the number of key figures the table will contain, assuming a quantity key figure requires 9 bytes, a currency key figure requires 9 bytes, and other numeric fields require 4 bytes (or more).
    • Every dimension table requires a foreign key in the fact table, so add 4 bytes for each key. Don't forget the three standard dimensions.
    • Add these figures together to get the size of one record.
    • Calculate the number of records in the fact table.
    • Multiply the size of one record by the number of records.
    Assume that the fact table indexes will take up as much space as the fact table itself. This is more index space than is usually required in most OLTP systems. In the fact table, many of the columns will be foreign keys with pointers to dimension tables. Each of them will have an index.
    Add an additional 150% for temporary table space and aggregate tables. An aggregate contains both new dimension and data tables. A rule of thumb is that all the aggregates will be the size of the fact table.
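    As a worked example of the arithmetic above (all record counts and key-figure mixes below are assumptions, purely for illustration):

```python
# Hypothetical InfoCube: 4 user dimensions + 3 standard dimensions.
# Dimension table record: n characteristics * 4 bytes + 4 bytes for the dimension key.
def dim_table_bytes(num_chars, num_records):
    record_size = num_chars * 4 + 4
    table = record_size * num_records
    return table * 2  # indexes assumed as large as the dimension table itself

# Fact table record: key-figure bytes + 4 bytes per dimension foreign key.
def fact_table_bytes(num_currency, num_quantity, num_other, num_dims, num_records):
    record_size = num_currency * 9 + num_quantity * 9 + num_other * 4 + num_dims * 4
    table = record_size * num_records
    return table * 2  # fact table indexes assumed as large as the fact table

dims = dim_table_bytes(num_chars=3, num_records=10_000)          # one sample dimension
fact = fact_table_bytes(num_currency=2, num_quantity=1, num_other=2,
                        num_dims=7, num_records=1_000_000)       # 4 user + 3 standard dims
subtotal = dims + fact
total = subtotal * 2.5  # add 150% for temp table space and aggregates
print(total)
```

    Note how the fact table (and its indexes) dominates the estimate, which matches the 80-90% rule of thumb stated above.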
    Hope this Helps
    Regards
    Shilpa

  • I have a request at the report level but the same is missing in the InfoCube

    Dear Experts,
    I have a request at the report level, but the same is missing at the compressed InfoCube level. What could be the cause? Does compressing the InfoCube delete the request? If so, why am I still able to view the other requests at the InfoCube manage level?
    Kindly provide with enough information.
    Thanks.............

    Hi
    Compressing InfoCubes
    Use
    When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests. One advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube.
    However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data, and reduces performance in reporting, as the system has to perform aggregation using the request ID every time you execute a query.
    Using compressing, you can eliminate these disadvantages, and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request ID. You must be absolutely certain that the data loaded into the InfoCube is correct.
    Features
    You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and can schedule it with a process chain.
    Compressing one request takes approx. 2.5 ms per data record.
    With non-cumulative InfoCubes, compression has an additional effect on query performance. Also, the marker for non-cumulatives in non-cumulative InfoCubes is updated. This means that, on the whole, less data is read for a non-cumulative query, and the reply time is therefore reduced. See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
    If you run the compression for a non-cumulative InfoCube, the summarization time (including the time to update the markers) will be about 5 ms per data record.
    If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing.
    If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting for example) you can run a zero-elimination at the same time as the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table.
    Zero-elimination is permitted only for InfoCubes, where key figures with the aggregation behavior ‘SUM’ appear exclusively. In particular, you are not permitted to run zero-elimination with non-cumulative values.
    For non-cumulative InfoCubes, you can ensure that the non-cumulative marker is not updated by setting the indicator No Marker Updating. You have to use this option if you are loading historic non-cumulative value changes into an InfoCube after an initialization has already taken place with the current non-cumulative. Otherwise the results produced in the query will not be correct. For performance reasons, you should compress subsequent delta requests.
    Edited by: Allu on Dec 20, 2007 3:26 PM

  • Performance issue with the ABAP statements

    Hello,
    Please can someone help me with the statements below, where I am getting a performance problem.
    SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
    SORT CHDATE by DOC_NUMBER.
    SORT SOURCE_PACKAGE by DOC_NUMBER.
    LOOP AT CHDATE INTO WA_CHDATE.
       READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
       MOVE WA_CHDATE-CREATEDON  to WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE to CIDATE.
    ENDLOOP.
    I wrote the above code for the following requirement:
    1. I have 2 tables from which I am getting the data.
    2. I have a common field, the CREATEDON date, in both tables; both tables have values for it.
    3. While accessing the 2 tables and copying to a third table, I have to modify that field.
    I am getting performance issues with the above statements.
    Thanks
    Edited by: Rob Burbank on Jul 29, 2010 10:06 AM

    Hello,
    Try a SELECT with a join like the following one instead of your code (complete the field list as needed):
    SELECT field field2 ...
      INTO TABLE it_table
      FROM table1 AS t1 INNER JOIN table2 AS t2
        ON t1~doc_number = t2~doc_number.
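    The point of the reply, letting the database do one set-based join instead of a per-row SELECT/SORT/READ TABLE loop, can be sketched generically (SQLite via Python; all table and field names are placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (doc_number INTEGER, createdon TEXT);
CREATE TABLE table2 (doc_number INTEGER, field2 TEXT);
INSERT INTO table1 VALUES (1, '20100101'), (2, '20100102');
INSERT INTO table2 VALUES (2, 'x'), (3, 'y');
""")

# One set-based inner join replaces the whole loop; only matching
# doc_numbers come back, already combined into one row each.
rows = conn.execute("""
SELECT t1.doc_number, t1.createdon, t2.field2
FROM table1 t1 INNER JOIN table2 t2 ON t1.doc_number = t2.doc_number
""").fetchall()
print(rows)  # [(2, '20100102', 'x')]
```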

  • Can we perform Join operation using SQLCall with Datatabae Query

    Hi,
    I am working on a TopLink SQLCall query. I am performing a join operation, but it is giving an error.
    So please, can anyone tell me whether we can perform a join operation using SQLCall with a database query?
    Thank you.

    You can use joining with SQLCall queries in TopLink, provided your SQL returns all of the required fields.
    What is the query you are executing and what error are you getting?

  • Perform join in VC without using BI query

    Hi experts
    Is it possible to perform a join on two tables in VC without using a BI query?
    I have data in two tables as follows
    Month       Closed
    6                  2
    7                  1
    9                  2
    and
    Month Open
    8                 1
    9                 1
    10               1
    and want result as
    Month    Open        closed
    6               --               2             
    7               --               1
    8               1               --
    9               1               2
    10             1               --
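    The desired result above is a full outer join on Month. As a sketch of just that logic (illustrative only, outside VC):

```python
closed = {6: 2, 7: 1, 9: 2}   # Month -> Closed
opened = {8: 1, 9: 1, 10: 1}  # Month -> Open

# Full outer join on Month: every month from either table appears once;
# a missing value on one side stays empty ("--" in the table above).
months = sorted(set(closed) | set(opened))
result = [(m, opened.get(m), closed.get(m)) for m in months]
for m, o, c in result:
    print(m, o if o is not None else "--", c if c is not None else "--")
```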

    Hi experts,
    I'm looking for a solution to the problem described above.
    I want to join a standard data source from the CE Server (e.g. BI_BPM_MY_PROCESSES_DS) with our own reporting source, so I don't have the opportunity to use a backend system (e.g. a BI system) to join the data sources!
    Is there a solution or workaround available?
    Thanks and regards,
    Bastian

  • Illegal cross join within the same dimension

    Hi,
    When certain fields are selected within the presentation table an "illegal cross join" error is returned by the BI Server. However if a FACT is added from one of the other presentation tables the "illegal cross join" error goes away. we need to query without fact column.
    We are getting following error
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14065] Illegal cross join within the same dimension caused by incorrect subject area setup: [ CALL_CENTER.COUNSELOR_MANAGER T782130] with [ CALL_CENTER.COUNSELOR_HR T781594 On CALL_CENTER.COUNSELOR_HR.MASTER_STAFF_COUNSELOR_ID = CALL_CENTER.MASTER_STAFF_COUNSELOR.MASTER_STAFF_COUNSELOR_ID, CALL_CENTER.MASTER_STAFF_COUNSELOR T781739] (HY000)
    Can anybody help me solving this issue.
    Thanks,
    KS.

    Please give us an example of what you need.
    OBIEE performs a query either within a dimension or through the fact table.
    You can't join two dimensions in the repository without going through a fact table.
    If you need to query without a fact column, it's because you have designed two dimensions where normally you could have one.
    You then have two solutions:
    * change the design of your logical model to make only one dimension;
    * use OBIEE logical SQL in Answers.
    http://gerardnico.com/wiki/dat/obiee/bi_server/design/obiee_logical_sql
    Success
    Nico

  • Does the InfoCube compression process lock the InfoCube?

    HI All,
    First of all, thanks for your active support and co-operation.
    Does the compression process lock the cube? My doubt is: while the compression process is running on a cube, if I try to load data into the same cube, will it be allowed or not? Please reply as soon as you can.
    Many thanks in advance.
    Jagadeesh.

    hi,
    Compression is a process that removes the request IDs from the loaded requests and aggregates the records, which saves space.
    When and why use InfoCube compression in real time?
    InfoCube compression collapses all requests in the cube into a single request (request ID 0), eliminating duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is: once you compress, you can no longer delete the compressed data from the InfoCube by request ID. You are safe as long as you don't have any errors in modeling.
    This compression can be done through Process Chain and also manually.
    Check these Links:
    http://www.sap-img.com/business/infocube-compression.htm
    compression is done to increase the performance of the cube...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/b2/e91c3b85e6e939e10000000a11402f/frameset.htm
    Infocube compression and aggregate compression are mostly independent.
    Usually if you decide to keep the requests in the infocube, you can compress the aggregates. If you need to delete a request, you just have to rebuild an aggregate, if it is compressed. Therefore there are no problems in compressing aggregates, unless the rebuild of the aggregates take a lot of time.
    It does not make sense to compress the InfoCube without compressing the aggregates. The idea behind compressing is to speed up InfoCube access by adding up all the data of the different requests. As a result you get rid of the request number; all other attributes stay the same. If you have more than one record per set of characteristics, the key figures are combined according to their aggregation behavior (SUM, MIN, MAX, etc.). This reduces the number of records in the cube.
    Example:
    requestid   date       0material   0amount
    12345       20061201   3333        125
    12346       20061201   3333        -125
    12346       20061201   3333        200
    will result in:
    requestid   date       0material   0amount
    0           20061201   3333        200
    In this case 2 records are saved.
    But once the requestid is lost (due to compression) you cannot get it back.
    Therefore, once you compressed the infocube, there is no sense in keeping the aggregates uncompressed. But as long as your Infocube is uncompressed you can always compress the aggregates, without any problem other than rebuild time of the aggregates.
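    The aggregation in the example above can be sketched in a few lines of Python (columns follow the example; the request ID collapses to 0):

```python
from collections import defaultdict

# (requestid, date, 0material, 0amount) rows before compression
rows = [
    (12345, "20061201", "3333", 125),
    (12346, "20061201", "3333", -125),
    (12346, "20061201", "3333", 200),
]

# Compression: drop the request ID, sum the key figure per characteristic combination.
compressed = defaultdict(int)
for _requestid, date, material, amount in rows:
    compressed[(date, material)] += amount

# All compressed records carry request ID 0; the original IDs are gone for good.
result = [(0, date, material, amount) for (date, material), amount in compressed.items()]
print(result)  # [(0, '20061201', '3333', 200)]
```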
    hope it helps..

  • Performance JOIN specified in FROM rather than WHERE

    Greetings,
    Now that 9i can use the JOIN keyword, I was wondering if there is a performance difference between the following two approaches:
    1.
    from
    table1 t1 join table2 t2 on t1.field1 = t2.field1
    2.
    from
    table1 t1, table2 t2
    where
    t1.field1 = t2.field1
    I'm new to using 9i, but in DB2 the first approach is much more efficient than the second. I was wondering if Oracle might have similar advantages?
    Many thanks for your thoughts.
    Regards,
    -matt

    Lets take a look:
    SQL> set autotrace traceonly
    SQL> select empno
    2 from emp e, dept d
    3 where e.deptno=d.deptno;
    14 rows selected.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 NESTED LOOPS
    2 1 TABLE ACCESS (FULL) OF 'EMP'
    3 1 INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)
    Statistics
    0 recursive calls
    2 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    900 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    14 rows processed
    SQL> select empno
    2 from emp e join dept d on e.deptno=d.deptno;
    14 rows selected.
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 NESTED LOOPS
    2 1 TABLE ACCESS (FULL) OF 'EMP'
    3 1 INDEX (UNIQUE SCAN) OF 'PK_DEPT' (UNIQUE)
    Statistics
    0 recursive calls
    2 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    900 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    14 rows processed
    SQL>
    Execution-wise and statistically, these two queries are the same. Experiment with more complex joins; maybe the results will differ.
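    The same equivalence check can be reproduced on any SQL engine; for instance in SQLite via Python (small EMP/DEPT stand-ins, made up here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER, deptno INTEGER);
CREATE TABLE dept (deptno INTEGER PRIMARY KEY);
INSERT INTO dept VALUES (10), (20);
INSERT INTO emp VALUES (7369, 20), (7499, 10), (7521, 30);
""")

# Old-style comma join with the join predicate in WHERE...
comma_join = conn.execute(
    "SELECT empno FROM emp e, dept d WHERE e.deptno = d.deptno").fetchall()
# ...and the ANSI JOIN ... ON form.
ansi_join = conn.execute(
    "SELECT empno FROM emp e JOIN dept d ON e.deptno = d.deptno").fetchall()

# Both syntaxes describe the same inner join; the result sets match.
print(sorted(comma_join) == sorted(ansi_join))  # True
```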

  • TS3694 My Ipod is not recognized by Itunes with my Windows 8 PC. Works fine with Windows 7 PC. Device sync test says "No device found". Already performed all of the Ipod device troubleshooting including reload Itunes, stop start Ipod device, changed drive

    Need help? Some of my Ipods are not recognized by Itunes with my Windows 8 PC. (Ipod Nano 4th gen  and Ipod Nano 6th gen), but on my Windows 7 PC, they work fine.  My Ipod 3rd gen and Ipod shuffle work both on Windows 8 and 7 PC's.  On the non-working Ipods, the  Device sync test says "No device found". Already performed all of the Ipod device troubleshooting including reload Itunes, stop start Ipod device, changed drive letter...
    any help is appreciated. Chris4sail

    Hello there, chris4sail.
    The following Knowledge Base article offers up some great step-by-step instructions on troubleshooting your iPod not being recognized in iTunes:
    iPod not recognized in My Computer and in iTunes for Windows
    http://support.apple.com/kb/ts1369
    Thanks for reaching out to Apple Support Communities.
    Cheers,
    Pedro.

  • Unable to view data in the InfoCube as well as in the query

    Hi all,
    I have done the init load into InfoCube 0PUR_C01. The monitor status is green. I checked the QM status, which is also green. There is data in the PSA.
    However, I am unable to view data in the InfoCube. Also, when I execute the report I receive an 'Application data not found' error.
    It looks to be a strange situation.
    Please help me out
    Regards
    YJ

    Hi,
    Sometimes the unavailability of a "PROCESSKEY" value in the records causes this.
    Refer to note 353042.
    And also refer the links:
    Re: Problem extracting 2LIS_03_BX into 0IC_C03
    Re: Records Not Added
    And also search this forum with "PROCESSKEY"
    With rgds,
    Anil Kumar Sharma .P
