Which Data Warehouse application

How can I know which Data Warehouse application I can use: Express Server, Express Analyzer or Express Objects? Thanks

What do you mean by this question? What do you want to do?
By the way, if you are thinking of using Express products, you should look into the OLAP option instead, since it replaces the Express products if you are on 9.2 or higher.

Similar Messages

  • Sort_area_size in data warehouse applications

    Hi everyone,
    I am working on optimizing a data warehouse application that involves inserts and updates on bitmap indexes. How do I decide on a proper value for sort_area_size? I think my application is doing a lot of I/O when inserts/updates are performed on a column that has a bitmap index. I tried various values for sort_area_size with no improvement in performance. The default sort_area_size is 5 MB, and I tried incrementing the value until it reached 100 MB, but I could not determine a proper value for sort_area_size that gives good performance. A reference to any documentation is also appreciated.
    Your help is very much appreciated.
    Thanks
    Suresh.

    Suresh,
    Please refer to the following documentation:
    http://otn.oracle.com/docs/products/oracle9i/doc_library/release2/server.920/a96533/memory.htm#39086
    Also, this seems to be a server technology question; please post in the server technology forum for more details.
    Regards:
    Igor
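    For what it's worth, sort_area_size can also be overridden per session, so each tuning experiment affects only the session running the bulk DML. A minimal, hypothetical JDBC sketch (the connection details and the 10 MB value are placeholders, not recommendations):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SortAreaExperiment {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- adjust for your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:orcl", "scott", "tiger");
                 Statement stmt = con.createStatement()) {
                // Override sort_area_size for this session only (value in bytes),
                // leaving the system-wide default untouched.
                stmt.execute("ALTER SESSION SET sort_area_size = 10485760");
                // ... run the insert/update workload being measured here ...
            }
        }
    }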

  • Data Access Object for Data Warehouse?

    Hi,
    Does anyone know what the DAO pattern looks like when it is used for a data warehouse rather than a normal transactional database?
    Normally we have something like CustomerDAO or ProductDAO in the DAO pattern, but data warehouse applications use JOINs and query multiple tables; for example, a query may contain data from the Customer, Product and Time tables. What should the DAO class be named? CustomerProductTimeDAO?? Any difference in other parts of the pattern?
    Thanks in advance.
    SK

    In my opinion, there is nothing in the Data Access Object design pattern that depends on any characteristic of its implementation or on the storage format of the data the pattern is designed to work with.
    The core purpose of the DAO design pattern is to encapsulate data access code and separate it from the business logic code of the application. A DAO implementation might vary from application to application; the design pattern does not specify any implementation details. A DAO implementation can be applied to a group of XML data files, an Excel-based CSV file, a relational database, or an OS file system. The design is the same for all of these; it is the implementation that varies.
    The core difference between an operational database and a strategic data warehouse is why and how the data is used; it is not so much a technical difference. The relational design may vary, however: there may be more tables and ternary relationships in a data warehouse to support more fine-tuned queries, and fewer tables in an operational database to support insert/update efficiency.
    The DAO implementation for a data warehouse would be based on the model of the databases. However the tables are set up, that is how the DAO is coded.
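    To make that concrete, here is a hypothetical sketch (all names invented): name the DAO after the business question rather than the joined tables, and let the implementation hide the star-schema join.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Named for the business concept, not the underlying tables.
    public class SalesReportDAO {
        private final Connection con;

        public SalesReportDAO(Connection con) {
            this.con = con;
        }

        // The star-schema join is an implementation detail hidden here.
        public List<String> findTopProductsForYear(int year) throws SQLException {
            String sql =
                "SELECT p.product_name, SUM(f.sales_amount) AS total " +
                "FROM sales_fact f " +
                "JOIN product_dim p ON p.product_key = f.product_key " +
                "JOIN time_dim t ON t.time_key = f.time_key " +
                "WHERE t.year = ? " +
                "GROUP BY p.product_name ORDER BY total DESC";
            List<String> products = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, year);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        products.add(rs.getString("product_name"));
                    }
                }
            }
            return products;
        }
    }
    The caller never sees the Customer/Product/Time join, so the naming problem disappears: the DAO is named for the report or business query it serves.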

  • RAC for Data Warehouse

    Hello,
    We have a research project for restructuring our data warehouse system.
    I would like to get some opinions about whether a RAC architecture can be a good solution for a data warehouse application.
    We use parallel queries heavily. Does running these kinds of queries across the multiple servers of a RAC cluster result in performance degradation compared with running them on a single monolithic server with multiple CPUs?
    I would appreciate any comments on using a RAC architecture for data warehouse systems.
    Regards,

    Maurice Muller wrote:
    "Just keep in mind that during the last 4 years (I guess your current system is about 4 years old) the CPUs became much faster. A CPU can't work without data, which means that the I/O throughput has to be fast enough to feed all your cores with data. The main bottleneck of all DWHs I have seen during the last 8 years was always the I/O, never the CPUs."
    And not just data warehousing, Maurice, but a basic principle for any data processing platform: the slowest layer is always the I/O layer... and it can be the most expensive one to solve too.
    Which is why newer technology like Infiniband is exciting, as it can also serve as the I/O layer. Instead of using the traditional HBA, which is typically configured with 2Gb fibre channels to the storage layer, using HCA cards you can wire this directly into an Infiniband storage array... and this can run at speeds of up to 40Gb. Dual connections mean a total theoretical pipe size of 80Gb. I do not know of any other standard technology (like GigE) that can provide similar bandwidth.
    Back to RAC though: with RAC, when you add a new server, it comes with a new set of I/O pipes... plus of course more RAM and more CPU cores. SMP server architecture does not scale like this at all. You only have x number of slots for PCI cards, CPUs and RAM: a very specific ceiling that cannot be moved. With MPP this ceiling is a lot higher and more flexible.
    You can also replace dual-core dual-CPU nodes with 6-core AMD Istanbul CPUs next year... and possibly 12-core CPUs the year after that. So even a smallish 4-node cluster with 16 cores in total can be grown significantly and remain a 4-node cluster, together with advances in HPC (High Performance Computing) like Infiniband.
    I'm not seeing much use of non-RAC RDBMS architecture in the future. Databases are getting ever bigger because we have the technology to crunch more data, and crunch it a lot more intelligently than ever before. My first production database was 4MB in size, and ran on a Novell File Server with two 20MB disks. I'm currently testing a 24TB array for use for a single database.
    Technology is inevitable, as is the growth in data volumes. And I cannot see a non-RAC architecture rising to that challenge. Especially not in something like data warehousing.

  • Reading data from an Excel file on the application server

    Hi,
    I am trying to read data from an Excel file that is on the application server.
    I tried using the function module ALSM_EXCEL_TO_INTERNAL_TABLE, but it didn't work.
    I also tried simply reading it with OPEN DATASET and READ DATASET, but that gives junk characters.
    Please suggest me if you have any solution.
    Best Regards,
    Brahma Reddy

    Hi Narendra,
    Please see the code below that I have written:
    OPEN DATASET pa_sfile FOR INPUT IN TEXT MODE ENCODING DEFAULT MESSAGE wf_mess.
    CHECK sy-subrc = 0.
    DO.
      READ DATASET pa_sfile INTO wf_string.
      IF sy-subrc <> 0.
        " End of file (or read error): leave the loop.
        EXIT.
      ELSE.
        " Split the raw line at the separator character held in wl_#
        " into the individual upload fields.
        SPLIT wf_string AT wl_# INTO wf_field1 wf_field2 wa_upload-field3
          wa_upload-field4 wa_upload-field5 wa_upload-field6 wa_upload-field7 wa_upload-field8
          wa_upload-field9 wa_upload-field10 wa_upload-field11 wa_upload-field12 wa_upload-field13
          wa_upload-field14 wa_upload-field15 wa_upload-field16 wa_upload-field17 wa_upload-field18
          wa_upload-field19 wa_upload-field20 wa_upload-field21 wa_upload-field22 wa_upload-field23
          wa_upload-field24 wa_upload-field25 wa_upload-field26 wa_upload-field27 wa_upload-field28
          wa_upload-field29 wa_upload-field30 wa_upload-field31 wa_upload-field32 wa_upload-field33
          wa_upload-field34 wa_upload-field35 wa_upload-field36.
        wa_upload-field1 = wf_field1.
        wa_upload-field2 = wf_field2.
        APPEND wa_upload TO int_upload.
        CLEAR wa_upload.
      ENDIF.
    ENDDO.
    CLOSE DATASET pa_sfile.
    Please note that I am using ECC 5.0 and it does not allow me to declare wl_# as type x as in your code.
    Also, since I am using text mode, I have to use the ENCODING addition (whereas you do not in your case).
    Please suggest any other way.
    Thanks for your help,
    Brahma Reddy

  • Is there any documentation which throws light on how data aggregation happens in data warehouse grooming? Exactly what algorithm does it follow for the different aggregation types (raw, hourly, daily)?

    Is there any documentation which throws light on how data aggregation happens in data warehouse grooming? Exactly what algorithm does it follow for the different aggregation types (raw, hourly, daily)?
    How exactly does it pick a specific data value during hourly and daily aggregations? That is, how is the value chosen? Does it average the samples, or simply pick the value at the start or end of the hour/day?

    I'll try one more time. :)
    Views in the operations console are derived from data in the operational database. This is always raw data, and typically does not go back more than 7 days.
    Reports get data from the data warehouse. Unless you create a custom report that uses raw data, you will never see raw data in a report; Microsoft, and probably all 3rd-party vendors, do not develop reports that fetch raw data.
    Reports use aggregated data: hourly and daily. The data is aggregated into the min, max, and avg sample for that particular aggregation window. For hourly data, you will see the min, max, and avg for that entire hour; the same goes for daily, where you will see the min, max, and avg data sample for that entire day.
    And to clarify even more, the values you see plotted on the report are avg samples. If you drill into the performance detail report, you can see the min, max, and avg samples, as well as the standard deviation (which is calculated based on these three values).
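    As an illustration of that aggregation (not the product's actual code), here is how a set of raw samples collapses into a single hourly row that keeps only the min, max, and avg; the sample values are invented:
    import java.util.Arrays;
    import java.util.DoubleSummaryStatistics;

    public class HourlyAggregation {
        public static void main(String[] args) {
            // Raw samples collected during one hour (illustrative values).
            double[] rawSamples = {42.0, 55.5, 38.2, 61.0, 47.3};

            // The hourly row keeps only min, max, and avg -- the individual
            // raw values are no longer recoverable after aggregation.
            DoubleSummaryStatistics hourly = Arrays.stream(rawSamples)
                    .summaryStatistics();
            System.out.printf("min=%.1f max=%.1f avg=%.1f%n",
                    hourly.getMin(), hourly.getMax(), hourly.getAverage());
            // Prints: min=38.2 max=61.0 avg=48.8
        }
    }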
    Jonathan Almquist | SCOMskills, LLC (http://scomskills.com)

  • Essbase vs. Relational Data Warehouse (Which one is the fact table in DW)?

    Guys, thanks in advance for your feedback, but below is a simple question I am trying to get feedback on. I am trying to compare an Essbase cube to a relational data warehouse containing the same set of information:
    Essbase Dimensions
    Time, Account, Product, Scenario (making it easy)
    Relational Data Warehouse
    Time (dim), Account (dim), Product (dim), Scenario (Fact table)
    OR
    Time (dim), Product (dim), Scenario (dim), Account (Fact Table)
    Which of the relational lines is correct? Is Account the fact table? or Scenario the fact table? Account will contain your usual P&L accounts. Scenario will contain your usual Actual, Budget and Forecast scenarios.
    Thanks,

    I am so not a DW guy, it's amazing, but I've never let little more than a brush with a product stop me from posting...
    Wouldn't all of your dimensions need to be in your fact table? How else would you join from the fact table to the dimensions?
    In either layout, wouldn't you have the keys for Product, Time, Scenario, and Account, and then data in the fact table?
    Or are you talking about the last dimension in your layouts being the columns? If that were the case (and I don't know that it is), I would guess that Scenario changes less, so it would be in columns, although I can definitely see that not being efficient, as you are likely to pull all or some of the Accounts for a given time, product and scenario rather than all of the Scenarios for a given time, product, and account.
    I'm really curious about this as I am just the consumer of star schemas, never (thankfully, and obviously, given the above insane ramblings) the designer of them.
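    In other words, under either layout the fact row would carry a foreign key for every dimension plus the measure. A hypothetical query against such a star (all table and column names invented for illustration):
    public class StarSchemaLayout {
        // The fact table carries a foreign key for EVERY dimension
        // (Time, Product, Scenario, Account), plus the measure itself;
        // neither Account nor Scenario "is" the fact table.
        static final String FACT_QUERY =
            "SELECT t.period, p.product_name, s.scenario_name, a.account_name, f.amount "
          + "FROM finance_fact f "
          + "JOIN time_dim t     ON t.time_key     = f.time_key "
          + "JOIN product_dim p  ON p.product_key  = f.product_key "
          + "JOIN scenario_dim s ON s.scenario_key = f.scenario_key "
          + "JOIN account_dim a  ON a.account_key  = f.account_key";

        public static void main(String[] args) {
            System.out.println(FACT_QUERY);
        }
    }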
    Regards,
    Cameron Lackpour

  • Implementing a hierarchical structure in a data warehouse

    I want to create a data warehouse for a credit card application. Each user can have a credit card and multiple supplementary credit cards. Each credit card has a main limit, which can be sub-divided into sub-limits for supplementary credit cards as requested by the user. Consider the following example:
    User “A” has a credit card “CC” with limit “L”, which is $100,000.
    User “A” requested a supplementary credit card “CC1”, which is assigned limit “L1” = $50,000. He requests another supplementary credit card “CC2”, which is assigned limit “L2” = $100,000.
    Source tables contain data like this:
    1. src_client_card_trans: contains transaction data of client/user credit card usage (client_id, credit_card_number, balance_acquired)
    Client_id     Credit_card_number     Balance_acquired
    A     CC1     $20,000
    A     CC2     $50,000
    A     CC     $70,000
    2. src_card_limits: contains client’s credit cards linked to credit limits.
    Credit_card_number     Limit_id
    CC1     L1
    CC2     L2
    CC     L
    3. src_limit_structure: contains the relationship of limits and sub-limits.
    Limit_id     Sub_Limit_id
    L     L1
    L     L2
    I have designed two dimensions and one fact table. Dimensions are:
    1. LIMITS: contains the limit_id.
    2. CLIENTS: contains credit card user’s information.
    And the fact table is LIMIT_BALANCES_FACT, which has some fact columns along with the above dimensions.
    How can I implement the above limit hierarchy scenario in the data warehouse? I need your suggestions.
    Thanks in advance

    Much depends on how you want to analyze the data and there are a few options:
    1) Use credit limit as an attribute of the customer dimension. This would allow you to create query filters that can just show those customers with a $100,000 credit limit. This would return a list of credit cards (since the attribute would be assigned to each credit card) and then you can simply add or just keep the parents of that result set.
    However, this assumes you do not want to measure data specifically relating to the credit card limit. For example, it would not be possible to view the total amount spent by all customers who had a credit limit of $100,000.
    In this case the attribute, credit limit, is simply used to filter a result set.
    2) Create a separate dimension called Credit Limit and create three levels:
    All
    Range
    Credit Limit
    The level Range would contain groupings of credit limits such as 100-500, 501-1,200, and so on (a sketch of such range bucketing follows the level list below).
    This would allow you to analyse your data by customer and by credit limit over time, allowing you to slice and dice quickly and easily.
    3) A second customer hierarchy could be added to the customer dimension. This would allow you to drill down through different credit limits to customers to individual credit cards. It would be advisable to follow the same approach as option 2 and create some groupings for the credit limits to make the drill-down easier for your business users to navigate:
    All
    Range
    Credit Limit
    Customer
    Credit Card
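    As mentioned under option 2, here is a hypothetical sketch of that range bucketing; the boundaries are invented and should be tuned to the real spread of credit limits:
    public class CreditLimitRange {
        // Illustrative boundaries for the Range level of a Credit Limit
        // dimension; raw limits are bucketed before loading the dimension.
        public static String rangeFor(double limit) {
            if (limit <= 500)    return "0-500";
            if (limit <= 1200)   return "501-1,200";
            if (limit <= 10000)  return "1,201-10,000";
            if (limit <= 100000) return "10,001-100,000";
            return "100,000+";
        }

        public static void main(String[] args) {
            // The $50,000 sub-limit (L1 from the example) and the $100,000
            // main limit (L) both land in the same Range-level bucket.
            System.out.println(rangeFor(50000));   // 10,001-100,000
            System.out.println(rangeFor(100000));  // 10,001-100,000
        }
    }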
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/


  • Why do we need SSIS and star schema of Data Warehouse?

    If SSAS in MOLAP mode stores data, what is the role of SSIS, and why do we need a data warehouse and the ETL process of SSIS?
    I have a SQL Server OLTP database. I am using SSIS to transfer my SQL Server data from the OLTP database to a data warehouse database that contains fact and dimension tables.
    After that I want to create cubes with SSAS from the data warehouse data.
    I know that MOLAP stores data. Do I need a data warehouse with fact and dimension tables at all?
    Isn't it better to avoid creating a data warehouse and create cubes directly from the OLTP database?

    Another thing to note is that data stored in a transactional system may not always be in an end-user-consumable format. For example, we may use bit fields/flags to represent some details in OLTP, since the storage required is minimal, but presenting them as-is would not make any sense to users, as they would not know what each bit value represents. In such cases we apply some transformations and convert the data into information that users can understand. This is also done in the warehouse, so that information in the warehouse can be used directly for reporting. Also, in many cases a report will merge data from multiple source systems; merging it on the fly in the report would be tedious and would put a load on the report server. In comparison, bringing the data onto a common layer (the warehouse) and prebuilding aggregates is beneficial for report performance.
    I think (not sure) we join tables in SSAS queries and calculate aggregations there.
    I think SSAS stores these values and joined tables so we do not need to evaluate those values again, and this behavior is like a data warehouse. Isn't it?
    So if I do not need historical data, can I avoid creating a data warehouse?
    On the backend, SSAS uses queries only to extract the data.
    By the way, I was not explaining SSAS. I was explaining what happens inside the data warehouse, which is a relational database by itself. SSAS is used to build a cube (OLAP structures) on top of the data warehouse. A star schema is easier for defining relationships and building aggregations inside SSAS, as it is simple and requires minimal lookups to be performed. Also, data would be held at the lowest granularity level, which can easily be aggregated to the required levels inside OLAP cubes. Cube processing is very resource intensive, and using the OLTP system would have a huge impact on processing performance, as it is not denormalized, and doing transformations etc. on the fly adds complexity. Precreating a layer (the data warehouse) holding data in the required format makes cube processing easier and simpler, as it just has to join the tables and aggregate data based on the relationships defined and the level needed inside the cube.
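    To illustrate the bit-flag point above with a hypothetical example: a compact OLTP status flag gets decoded into a user-readable label on its way into the warehouse, so reports never have to interpret raw bits. The flag values and labels are invented:
    public class StatusFlagDecode {
        // Hypothetical OLTP bit flags: compact to store, opaque to users.
        static String decodeOrderStatus(int flag) {
            switch (flag) {
                case 0:  return "Open";
                case 1:  return "Shipped";
                case 2:  return "Cancelled";
                default: return "Unknown";
            }
        }

        public static void main(String[] args) {
            // During the ETL step, each raw flag becomes a label that the
            // warehouse (and every report built on it) can use directly.
            int[] rawFlags = {0, 1, 2, 1};
            for (int flag : rawFlags) {
                System.out.println(flag + " -> " + decodeOrderStatus(flag));
            }
        }
    }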
    Visakh
    http://visakhm.blogspot.com/

  • What Master Data to hold within a Data Warehouse

    Hi,
    We are developing a data warehouse which will incorporate Master Data entities. We have a pre-existing Master Data Management solution which will be the system of source for MD within the DW and associated Data Marts. We have decided to keep a copy of the MD within the DW. However, we are of two schools of thought on what data should be held.
    One school says that only those attributes that shape a query result should be held in the DW MD; the second says that all MD that may be used by a reporting system should be held within the DW, so that it is the single source of data for the reporting application. Let me give an example. Let's say we have a Customer MD entity with the following attributes:
    Customer
    ID
    NAME
    COUNTRY *
    EMAIL ADDR
    CITY *
    PHONE NUM
    GENDER *
    LAST LOGGED IN
    FAX ADDR
    DATE OF JOINING *
    LOGO (binary)
    Now, we will never do a query based on phone number, fax or email address etc., but the attributes flagged with a * will shape queries when coupled with our facts, such as "find all male customers with 4 or more transactions over $1000" or "find all customers registered from 2007 based in New York who have purchased an X". However, when showing the results we will always show the full customer profile, including name, email address etc. (I realise the queries are very specific and not report queries as such, but they suffice for the question at hand.)
    The first school says only the query-shaping MD elements should form the MD within the DW, and that the reporting application should obtain the remainder directly from the group MD system as required. The second school says that the DW (or the DM) should furnish all the MD required by the application. My question is: which of the two approaches is considered best practice and, as importantly, why?
    Cheers,
    Daryl

    Do you have an ODS? An NDS? Or do you just have the data warehouse as the only resource for covering reports and dashboards?
    If you have only the data warehouse, then you have to cover all the reports' requirements within the data warehouse, no matter whether a field is used in filtering/slicing and dicing or as a display-only field. So I would add those fields, such as names, email address and phone number, as attributes in the data warehouse. But I would only apply indexing and data warehouse best practices for performance tuning to attributes that participate in slicing, dicing and filtering (such as country, year, ...).
    On the other hand, if you use an SSAS multi-dimensional cube on top of your data warehouse, then you can set some attributes to be only visible (attributes for display only), and some of them to be visible and hierarchy-enabled (attributes that participate in slicing, dicing and filtering).
    Regards,
    Reza
    SQL Server MVP
    Blog: http://rad.pasfu.com
    SQL Server Integration Services 2012 Tutorial Videos:
    http://www.radacad.com/CoursePlan.aspx?course=1

  • Should we be using RAC for a data warehouse?

    We have an Oracle 11.1 data warehouse system. We were having some performance issues with the system, so we shut down one of the RAC nodes to see if that was causing the problem. The problem was slow updates on a table (all 30+ million rows in one table had to be fixed). One other performance problem is queries of large partitioned tables (even if the partition key is used). Both bulk collects and bulk inserts are very fast.
    Question: for an 11.1 data warehouse system, should we use RAC? Why?
    Thank you...

    "...a school of thought that suggests RAC potentially decreases system availability, rather than increasing it."
    RAC also has the potential of increasing availability. The potential "cuts both ways", so to speak.
    I've worked with non-RAC and RAC databases on a variety of platforms. My experience doesn't show evidence that RAC decreases availability. Given that most servers, even in non-HA clusters, are generally very reliable, downtime is low in both non-RAC and RAC environments. However, RAC does provide an availability advantage: protection against node outage. And there are environments which do require the availability of RAC. Not all applications require it. RAC is oversold, not in terms of advantages but in terms of installations.
    "...the increased complexity and the increased risk of both software and human related errors in a RAC environment"
    I would say that a similar argument arises in DASD v SAN. A SAN is more complex. Human error on a SAN causes a much higher cost. Human error does occur on a SAN. However, no one rejects a SAN on these grounds alone.
    RAC is complex to implement. It requires more skills to administer and diagnose. However, if it is set up well, it doesn't suffer outages. An outage from human error is the same as in a non-RAC environment.
    The issue isn't RAC. The issue is that too many customers buy RAC without seriously evaluating whether:
    a. they need the additional minute increase in availability
    b. their applications are "RAC-aware" (TAF is still misunderstood)
    c. they have the skills
    RAC provides scalability. It also provides HA. Let me say that again: it also provides HA.
    I've seen a high-end failover cluster environment where one of the "best" vendors in the world talked of a 10-30 minute outage for the failover.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
    Edited by: Hemant K Chitale on May 31, 2009 11:41 PM

  • How to design a churn Data Mining application

    Dear All,
    I am a beginning software engineer. I am interested in designing a churn data mining application for telecom companies.
    My questions are:
    1. Can I use the ODM sample code to do this for creating the models, lifts, scoring, etc.?
    2. How do I attach a Java Swing front-end application to the data mining in the Oracle 9i database server?
    3. Is the scoring shown in the sample code accurate and useful in live deployment?
    4. For the data warehouse behind the data mining, what are the special considerations for the warehouse vis-à-vis dimensions, fact tables, etc.?
    Regards,
    Olatuja Abayomi

    Thanks :)
    However, what I meant is the implementation (the java file).
    For example, the Java RMI tutorial at http://java.sun.com/docs/books/tutorial/rmi/overview.html defines two interfaces on the server side.
    One interface is the Remote interface, which has to be on the server side.
    The other is an interface with a generic type, which is:
    public interface Task<T> {
        T execute();
    }
    Then both compiled interfaces (two class files) are compressed into a jar file, and this jar file needs to be reachable by both the client and server sides of the code.
    So, the client side needs the implementation of the interface Task.
    Or, after we define the two interfaces (the Remote interface and the Task interface) on the server side, we could also implement the Task interface on the server side, then compile the server side and compress the compiled interfaces and implementation (three class files) into a jar file, and pass this jar file to the client; then the client needs no separate implementation of the interface Task, since the implementation is defined and compressed in the jar file already.
    I hope I explained it better this time, or I am just lost at this point.
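    To make the two-interface arrangement concrete, here is a minimal sketch in the spirit of that tutorial (SquareTask is an invented name; Task is the interface quoted above). The implementation is serializable so the task object can be shipped across the wire; a side that only receives tasks needs just the interfaces from the shared jar, with the implementation class loaded dynamically:
    import java.io.Serializable;
    import java.math.BigDecimal;

    // Implements the Task<T> interface from the shared jar. In the Sun
    // tutorial pattern, whichever side constructs task objects holds
    // this class; the other side works against the interface alone.
    public class SquareTask implements Task<BigDecimal>, Serializable {
        private static final long serialVersionUID = 1L;
        private final BigDecimal value;

        public SquareTask(BigDecimal value) {
            this.value = value;
        }

        @Override
        public BigDecimal execute() {
            // The computation the remote side runs on the caller's behalf.
            return value.multiply(value);
        }
    }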

  • PAS - Connection to Oracle data warehouse

    Hi,
    We are looking to connect to an Oracle data warehouse and have installed an Oracle client on the server.
    We created the link ID and are using the built-in connection to Oracle, OCI.DLL. However, we get an error back saying that it could not load the OCI.DLL.
    Any ideas?
    Colin Cooper

    Hi Robert,
    "I'm confused Pedro. What is the NW Database Instance? Is this another installation of CE? Or is this the relational database running behind CE? More confusing is the fact that you say SQL Server is installed on the "Database Server". If SQL Server is on this machine, why is Oracle in the picture?"
    Because of the system landscape this customer has, we use a distributed system installation of NetWeaver (NW), which is composed of:
    - Central services instance (SCS)
    - Database instance (DB)
    - Primary application server instance
    The database instance is the one installed on the Database Server. MS SQL Server is also installed on the same server, because it is used as the system database for SSM (it stores all the metadata of SSM). Finally, Oracle is the database used by the data warehouse, the place we need to connect to in order to get all the data to load into the PAS database. This is a different server.
    "I understand that. My question is, does Oracle have a fix for this issue available? Have you checked Metalink to see whether or not a recent Oracle patch set, or the most recent Oracle client, already has a fix for this issue?"
    I did not check that. I will check with the customer which version is currently used and whether any fix exists for this issue.
    "I have a machine running Windows 2003 Server 64 with the 32bit Oracle client installed and have no issues with setting up an ODBC connection to Oracle. What I had to do though was ensure that I ran "C:\WINDOWS\SysWOW64\odbcad32.exe" and not the standard "%SystemRoot%\system32\odbcad32.exe", which is what the ODBC applet in Control Panel usually calls. Because you're running 32bit Oracle, you have to run the 32bit ODBC applet in order to successfully create the System DSN which can be called by PAS."
    That's exactly what I need. As soon as I try this I will give you feedback.
    Thanks
    BR
    Pedro

  • Data warehouse problem plz help

    Hi, I have a problem making my first warehouse.
    First of all, I have many operational databases, and I want to build a warehouse that takes the data from these DBs and saves it according to time...
    Also, how do I connect a VB .NET application to retrieve data and run queries against the data warehouse?
    Is this possible? Can anyone help, please?

    "Because of this our server gets shut down automatically"
    No. Just because the connection pool got suspended, the server should not go down. There is some other issue which you did not notice. For the Data Source to function properly, make sure that the initial and maximum connection limits have been set appropriately (preferably both should be equal), and make sure that the database is always up and running and allows that many connections. Check with the DBA for the DB connection limit settings.
    Raise an SR with support if you are not able to figure out the exact issue.
    Regards,
    Anuj
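    On the connection half of the original question, the usual pattern is an ordinary database connection from the application to the warehouse. A hypothetical sketch in Java/JDBC (a VB .NET application would follow the same shape with ADO.NET and an Oracle or ODBC provider; the URL, credentials, and table names below are placeholders):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class WarehouseQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials -- point these at the warehouse.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dwhost:1521:dwh", "report_user", "secret");
                 Statement stmt = con.createStatement();
                 // An illustrative time-sliced query against an invented fact table.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT load_date, COUNT(*) FROM sales_fact GROUP BY load_date")) {
                while (rs.next()) {
                    System.out.println(rs.getDate(1) + ": " + rs.getLong(2));
                }
            }
        }
    }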
