Staging Area with OWB 10.2 - necessary or not?

Hi to all,
I have read so much about staging areas and OWB 10.2 that I am totally confused: some documents and PowerPoints on the web say you do not need one, others say you do. I am planning a DWH and am not sure whether a staging area is necessary, because the mappings do the ETL work internally. Most of my data sources are tables/views/MViews in a database.
Thank you very much for any help concerning this question!
Regards
Thomas

Would you prefer the answer that you MAY need one? Then again, you may just WANT one!
For example, if you are building against a busy, high-transaction-volume 24/7 OLTP system, you may find that you need a local snapshot in order to do a complete build against a consistent set of source data.
Then again, you may find that bringing just the delta data into a local snapshot makes for a much more efficient load than running against huge full remote tables, especially if they are not well partitioned and/or indexed.
Complex joins against a remote system may also run more efficiently if you bring the data across with simple table dumps into a staging area that you can index to optimize your queries, rather than suffering the poor performance of complex joins over a dblink. That goes double if you need to perform complex joins across more than one db link to multiple source systems. How big a Cartesian product do you want bouncing around the network in that sort of scenario? Sure, maybe you can do it - but how much are you going to impact performance across the board doing things like that?
Is the source system already stressed to the max and sitting on a vintage piece of equipment, while your shiny new DW environment is blessed with tons of resources that would make the ETL run several times faster if you first copied the data over locally?
So, do you need a staging area?
The fact is that there is no generically correct answer to this question.
You have to look at the specifics of your data requirements and your environment to answer that question. There are costs and benefits to having a staging area, and you have to determine which way the cost/benefit analysis comes out for your specific project.
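If you do go with one, the mechanics need not be complicated; here is a minimal sketch, with hypothetical names (src_link, orders, stg_orders):

    -- Pull a (delta) snapshot of a remote table into a local staging table.
    -- src_link, orders and stg_orders are hypothetical names.
    CREATE TABLE stg_orders AS
      SELECT *
      FROM   orders@src_link
      WHERE  last_update_date >= TRUNC(SYSDATE) - 1;

    -- Index locally so downstream joins don't pay the db link penalty.
    CREATE INDEX stg_orders_cust_ix ON stg_orders (customer_id);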
Mike

Similar Messages

  • Business area with SAP support in SAP OSS notes

    Dear all,
    I would like to find the OSS documentation which mentions that SAP no longer supports business areas. Does anyone know the exact OSS note number for that statement?
    As far as I remember, SAP does not recommend business areas anymore and advises customers to use PCA or the new GL instead.
    Thank you for your kind help.
    Kind regards,

    Check OSS Note 321190. Here is one section:
    To meet the changing requirements, we will focus the further functional developments in Financial Accounting on the profit center entity. With General Ledger Accounting (new) in Release SAP ERP 2004, you can create financial statements for profit centers. See Note 756146 for more detailed information.
    o The business area will be retained in the present form. The data and functions will continue to be available. In the context of the use of classic General Ledger Accounting, business area accounting (Customizing OB65 for business area financial statements, SAPF181 and SAPF180) will continue to be supported to the known extent; only a further development in the context of classic General Ledger Accounting is not planned.
    BG

  • Staging Area Schema

    My source and target tables are the same, and I am trying to update certain columns in table XXX. I have set the staging area to be different from the target and specified a staging schema where I do have the CREATE TABLE privilege, but when I run the interface it tries to create a temp table in the target schema, where I do not have the privilege to create tables.
    In the interface I have checked that all my mapping processes are in the staging area, not the source or target.
    Any clue...?
    Thanks

    Hi,
    It depends on the KM that you are using... If you are using the IKM Incremental Update, the process will create a temp table at the target because it needs it to make the PK comparison.
    My suggestions to work around it:
    1) if your ETL allows it, work with the IKM Control Append
    2) if you need an Incremental Update ELT, just point the "Work Area" of the target Physical Schema to the "Staging Area" schema, change the database connection user to the staging area user, and give that user the necessary grants on the tables of the target schema (insert and update, I imagine).
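    For option 2, the grants could look like this (a sketch; stg_user and tgt.xxx are hypothetical names):

        -- Run as the target schema owner:
        GRANT SELECT, INSERT, UPDATE ON tgt.xxx TO stg_user;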
    Just to be sure, are you using Oracle DB as source, staging and target?

  • Problem with 10G R2 installation (OUI - 10133 Invalid Staging Area) on W2K3

    I'm getting a strange error installing 10G R2 Database on a Windows 2003 server. I get "OUI - 10133 Invalid Staging Area. There are no top level components for Windows NT, Windows 2000 available for installation in this staging area." This happens when I choose an Advanced installation or when I choose a starter database.
    The funny thing is that I've performed this installation with this exact installation disk on a virtual server on my laptop with Windows 2003.
    Also, I ran a quick test with the 10G R1 installation files and I don't get the error. I want to run tests on case insensitivity with LIKE searches (a new feature enabled in 10G R2) so I can't use 10G R1.

    I have seen that problem frequently, and it is because for some reason the installer doesn't like burned CDs. Copy all the staging files over to disk. I usually download the zip files onto a staging drive on the server, unzip, install, then delete the install files when I'm done if I need the space.
    This has two major benefits: I don't run into the "staging area" error, and the install only takes about 30 minutes compared to about 3 hours when installing from CD.

  • Maintaining the structure of a SQL Server staging area and dwh aligned with the Oracle data source

    Hi,
    I'm working in a context where the data source system, in Oracle, is a continuous work in progress. Every 1-3 weeks the data source system in the prod environment is updated with new tables or altered tables (new columns or altered columns), so it is important to apply the corresponding changes to the data structures of the SQL Server staging area and dwh quickly.
    The issues to solve are:
    a. keeping the SQL Server data structure of the staging area aligned with the data structure of the Oracle data source;
    b. keeping the SQL Server data structure of the dwh aligned with the data structure of the SQL Server staging area.
    To solve these issues it would be useful to have an automatic way to be alerted when a data structure change occurs, and a simple way to apply the changes to the SQL Server data structures.
    Any suggestions, please? Many thanks

    We use the Oracle CDC service in SQL Server. It has a flag to indicate schema changes happening at the Oracle end. We track the schema changes using it and then apply the changes on the SQL Server side. Regarding automation, you can use Biml as suggested, or .NET scripts inside a script task.
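    If you also want a cheap check on the Oracle side, the data dictionary records the last DDL time per object; here is a minimal sketch (SRC_SCHEMA and the 7-day window are assumptions):

        -- List source objects altered in the last 7 days (run on the Oracle source).
        SELECT owner, object_name, object_type, last_ddl_time
        FROM   all_objects
        WHERE  owner = 'SRC_SCHEMA'
        AND    last_ddl_time > SYSDATE - 7
        ORDER  BY last_ddl_time DESC;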
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • How to print staging areas and doors on the pick doc?

    Dear experts!
    Thank you for your attention!
    In our system, we use Lean WM.
    We know staging areas and doors can also be printed on the pick doc. How do we do that?
    Best regards!
    Tangdark

    I'm not a WM expert, but I am pretty sure nothing special is required for that. If the fields where this information is stored are used by the output processing program, and the data is passed to the form and printed by it, there should be no problem. If the data is there but it's not printed, then you might need to adjust the output program and/or the form.
    You might want at least to try it first and use Search whenever possible, then come back with more specific questions, if necessary.

  • E$ table to be created in STAGING Area

    Hello All,
    I have an interface whose Staging Area is different from the TARGET. I have created an ODI constraint on the target table in my model. Currently, when I run the scenario, the E$ table is created in the target database.
    We are planning to use the E$ table for tracking purposes, and the Integration team would have access to it.
    1) How can I force the E$ table to be created in the ODI STAGING AREA (database) rather than in the target database?
    2) What is the use of the RECYCLE ERRORS option in the IKM?
    Kindly clarify.
    Thanks

    Hi Nag,
    Unfortunately a huge KM modification is necessary to accomplish this, or you can divide the work into more than one interface (process). Let me try to explain:
    The IKM Incremental Update creates the I$ table at the target even when the staging area is another Logical Schema, because it needs the target table to "decide" which records are updates and which are inserts.
    Because of that, ODI internally sets the connections to the target, since the I$ table will be created there anyway.
    To accomplish what you need, it would be necessary to change the way this kind of ETL is done and force the connections to the new staging area.
    That is possible but will take some time.
    An alternative solution, by process, could be:
    1) make a copy of the target at the staging area (if possible, I mean, if the amount of records allows it)
    2) add 2 columns to this copy to flag the kind of DML (flg_ins, flg_upd for instance)
    3) copy all desired constraints to the temporary table
    4) use the same original interface but change the target to the temporary table
    5) in the mapping, check just "insert" for column flg_ins (value 'Y' for instance)
    6) in the mapping, check just "update" for column flg_upd (value 'Y' for instance)
    7) execute the interface
    8) create a procedure with 2 steps:
    9) step 1 - source tab: select all records from the temporary table where flg_ins = 'Y'
    - target tab: insert into the original target
    10) step 2 - source tab: select all records from the temporary table where flg_upd = 'Y'
    - target tab: update the original target
    For items 9 and 10 it will be necessary to choose the technology and the respective Logical Schema. A sketch of those two steps follows.
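    A rough SQL sketch of steps 9 and 10 (target_tab, tmp_target_copy and the columns are hypothetical names):

        -- Step 1: insert the rows flagged as new.
        INSERT INTO target_tab (id, col1)
        SELECT id, col1 FROM tmp_target_copy WHERE flg_ins = 'Y';

        -- Step 2: update the rows flagged as changed.
        UPDATE target_tab t
        SET    t.col1 = (SELECT c.col1 FROM tmp_target_copy c WHERE c.id = t.id)
        WHERE  EXISTS (SELECT 1 FROM tmp_target_copy c
                       WHERE c.id = t.id AND c.flg_upd = 'Y');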
    If the amount of records at the target doesn't allow this technique, then either:
    - it will be necessary to change the IKM, or
    - work with a DBLINK from the target to the staging area, if both technologies are Oracle and creating a dblink is possible. Either way an IKM customization will be necessary, but a small one.
    Does it make any sense to you?

  • Why can't I see the cube that I created with OWB?

    hello:
    owb version:10.1.0.4
    db version:10.2.1.0.1
    Business Intelligence 10.1.2.0.2
    windows 2003
    I used OWB to create 3 dimensions and 1 cube, and deployed them to the target schema successfully.
    I exported the metadata to an eex file and imported the eex file into Discoverer Administrator; when creating a workbook I can see the cube created with OWB.
    When I select the ORACLE BI DISCOVERER option to connect to target_schema, I can see my cube and dimensions;
    but when I select the ORACLE BI DISCOVERER FOR OLAP option to connect to target_schema, the login is successful, but when I create my workbook I can't see my cube and dimensions.
    Why is this, and what do I need to do so I can see them?
    Please help!

    There are three ways to deploy a multi-dimensional model, and the final model selection depends on the database options you have purchased and the tools you have at your disposal:
    relational access - You can define relational cubes and dimensions within OWB and deploy these to your target schema. The dimensions are implemented as query-rewrite objects and can be viewed in the view USER_DIMENSIONS. The data relating to a dimension is stored within a simple relational table, in either a snowflake or star schema format. The cube object does not really exist as such but is implemented as a fact table. Typically, access to this type of implementation is via Discoverer, and you need to create a Discoverer EUL to view the deployed tables. Within Discoverer you will not see the dimensions or cubes listed, but you will see tables and/or views relating to your deployed schema.
    The Discoverer Bridge will create the eex/EUL file to install the schema within Discoverer Administrator. However, you must use Discoverer Plus/Viewer with the relational connection details.
    As an FYI, the management of Discoverer EULs is integrated within OWB 10gR2.
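    As a quick sanity check after a relational deployment, you can list the deployed dimension objects from the data dictionary, for example:

        -- Query-rewrite dimension objects deployed to the current schema.
        SELECT dimension_name, invalid
        FROM   user_dimensions;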
    OLAP models - OWB also supports the deployment of a schema as either a relational OLAP model or within an analytic workspace (this creates a MOLAP model). The OLAP deployment can be viewed only using Discoverer OLAP; however, management of the schema does not require an EUL or administration through Discoverer Administrator - all the administration is managed by OWB. To access a multi-dimensional model using Discoverer OLAP you must first run the OLAP bridge from within OWB. This will deploy all the required metadata for Discoverer OLAP to connect to your schema.
    However, I would recommend you urgently upgrade to OWB 10gR2, as it provides additional tuning options for the OLAP data model that are not present in your current version, OWB 10.1.0.4. Unfortunately, with your current OWB version building an OLAP model is a two-stage process, a form of data staging: you need to create a relational target schema first and then replicate that target schema within an OLAP model. OWB 10gR2 allows you to map data from a source object directly to a multi-dimensional object with no need to stage the data.
    If you cannot upgrade at this point, my suggestion would be to use OWB to design the target relational schema and then use Analytic Workspace Manager (AWM) to design your OLAP schema for Discoverer OLAP access. AWM will provide you with all the tools for tuning and managing your OLAP model within a 10gR2 database schema.
    Once you have upgraded to OWB 10gR2 you can stop using AWM, as all the tuning features are within OWB.
    Hope this makes sense and is helpful.
    Keith

  • Load SQL Server data with OWB 11.2

    Hi,
    We are using OWB 11.2 on Oracle Exadata Database Machine X2-2.
    We want to load data from multiple SQL Server tables into our DWH on Oracle 11.2 with OWB 11.2.
    We want to load approx. 100 million records a day.
    I've read some articles about this, and the advice was to dump the data from SQL Server to files and load the files with OWB.
    We've tried to make a connection to SQL Server with OWB; this only works partially.
    In the OWB client I can import the database objects and sample the data from the SQL Server tables (all done on an MS Windows client).
    From the server I am not able to run even a very basic mapping with a SQL Server source table and an Oracle target table and no difficult transformations.
    Is there another method/best practice to get the data over?
    I've read something about a database link from Oracle to SQL Server, but I don't know the details.
    Is there a tool that runs under Linux that can export data from SQL Server to a text file?
    Regards,
    Emile

    Hi Emile,
    regarding extracting data from MSSQL with OWB on a Unix platform (using Generic Connectivity), see:
    Re: SQLServer access from AIX Warehouse builder
    Re: OWB on Solaris Connectivity with SQL SERVER on Windows
    "We want to load approx. 100 million records a day. I've read some articles about this and the advice was to dump the data from SQL Server to files and load the files with OWB."
    100 million records per day is not a problem for a daily extract from MSSQL Server if you have 1-2 hours.
    In my opinion dumping to text files is bad practice and unnecessary unless the customer has special requirements (for example, for security reasons).
    "a SQL Server source table and an Oracle target table without any difficult transformations"
    In my opinion the best way to process data from MSSQL is to extract it to a staging area (schema) on the Oracle DB with mappings as simple as possible (ONLY filters, without any joins), and to perform most of the data processing in the staging area or while moving from staging to the DWH.
    Also look at the OWB user guide (how to use Generic Connectivity in OWB):
    http://download.oracle.com/docs/cd/E11882_01/owb.112/e10582/loading_ms_data.htm#i1064950
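    For reference, the Generic Connectivity route ends in an ordinary database link; here is a minimal sketch, assuming an ODBC gateway alias mssql_dsn is already configured in your listener/tnsnames:

        -- Create a db link through the ODBC gateway (mssql_dsn is a hypothetical alias).
        CREATE DATABASE LINK mssql_link
          CONNECT TO "sqlserver_user" IDENTIFIED BY "password"
          USING 'mssql_dsn';

        -- SQL Server identifiers usually need quoting through the gateway.
        SELECT COUNT(*) FROM "dbo"."SourceTable"@mssql_link;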
    Regards,
    Oleg
    Edited by: added link to OWB doc with description of using Generic Connectivity

  • How to Compare Data length of staging table with base table definition

    Hi,
    I have two tables: a staging table and a base table.
    I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and base table differ: the length of every column in the staging table is 25% greater, so data can be loaded without errors. For example, if we have a CITY column of length 40 in the staging table, it is 25 in the base table. Once data is loaded into the staging table, I want to compare the actual data length of every column in the staging table with the definition of the base table (DATA_LENGTH for every column from ALL_TAB_COLUMNS), and if any column's length differs I need to update the corresponding row in the staging table, which also has a flag called ERR_LENGTH.
    For this I'm using:
    cursor c1 is select length(a.id), length(a.name)... from staging_table;
    cursor c2(name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
    But we get all the data at once in the first query, whereas with the second cursor I need to fetch each column and then compare with the first.
    Can anyone tell me how to get the desired results?
    Thanks,
    Mahender.

    This is a shot in the dark, but take a look at the example below:
    SQL> DROP TABLE STAGING;
    Table dropped.
    SQL> DROP TABLE BASE;
    Table dropped.
    SQL> CREATE TABLE STAGING
      2  (
      3          ID              NUMBER
      4  ,       A               VARCHAR2(40)
      5  ,       B               VARCHAR2(40)
      6  ,       ERR_LENGTH      VARCHAR2(1)
      7  );
    Table created.
    SQL> CREATE TABLE BASE
      2  (
      3          ID      NUMBER
      4  ,       A       VARCHAR2(25)
      5  ,       B       VARCHAR2(25)
      6  );
    Table created.
    SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    SQL> UPDATE  STAGING ST
      2  SET     ERR_LENGTH = 'Y'
      3  WHERE   EXISTS
      4          (
      5                  WITH    columns_in_staging AS
      6                  (
      7                          /* Retrieve all the columns names for the staging table with the exception of the primary key column
      8                           * and order them alphabetically.
      9                           */
    10                          SELECT  COLUMN_NAME
    11                          ,       ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
    12                          FROM    ALL_TAB_COLUMNS
    13                          WHERE   TABLE_NAME='STAGING'
    14                          AND     COLUMN_NAME != 'ID'
    15                          ORDER BY 1
    16                  ),      staging_unpivot AS
    17                  (
    18                          /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
    19                           * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
    20                           * in the same order as the ROW_NUMBER() function in columns_in_staging
    21                           */
    22                          SELECT  ID
    23                          ,       COLUMN_NAME
    24                          ,       DECODE
    25                                  (
    26                                          RN
    27                                  ,       1,A
    28                                  ,       2,B
    29                                  )  AS VAL
    30                          FROM            STAGING
    31                          CROSS JOIN      COLUMNS_IN_STAGING
    32                  )
    33                  /*      Only return IDs for records that have at least one column value that exceeds the length. */
    34                  SELECT  ID
    35                  FROM
    36                  (
    37                          /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
    38                           * the check to see if there are any differences in the length if so set a flag.
    39                           */
    40                          SELECT  STAGING_UNPIVOT.ID
    41                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
    42                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
    43                          FROM    STAGING_UNPIVOT
    44                          JOIN    ALL_TAB_COLUMNS ATC     ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
    45                          WHERE   ATC.TABLE_NAME='BASE'
    46                  )       A
    47                  WHERE   COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
    48                  AND     ST.ID = A.ID
    49          )
    50  /
    2 rows updated.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX                Y
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX               Y
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    Hopefully the comments make sense. If you have any questions please let me know.
    This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
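    If the unpivot feels heavy, a simpler (if more manual) sketch is one UPDATE with an OR branch per column, reading the base table's lengths from ALL_TAB_COLUMNS; the names follow the example above:

        UPDATE staging st
        SET    err_length = 'Y'
        WHERE  LENGTH(st.a) > (SELECT data_length FROM all_tab_columns
                               WHERE table_name = 'BASE' AND column_name = 'A')
        OR     LENGTH(st.b) > (SELECT data_length FROM all_tab_columns
                               WHERE table_name = 'BASE' AND column_name = 'B');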
    Thanks!

  • Load .csv file data with OWB process flow using the Web

    Hi,
    I have a file on my local machine (machines of multiple users) and need to load its data through a web user interface.
    Let's say we have a web page with multiple radio buttons corresponding to different sources; clicking a button will pass the path of the .csv file through the application (an API or Java programming interface) and execute an OWB process flow that accepts the file path as an input parameter for loading.
    It should also facilitate viewing and updating the data through the web based on user requests.
    I need your guidance on how I can implement this with OWB 11g R2, assuming web browser functionality. Please confirm whether this is possible and, if yes, please throw some light on the steps to implement it.
    Thanks

    Hi David,
    Thanks for your reply.
    I understand your proposed solution, but my requirement is as follows.
    1. We are currently considering a web page, likely implemented in Java, allowing users to load .csv file data into the staging area (loading a flat file into a database table).
    Case 1: assuming the OWB software is not installed on the user's machine (I think not). Is it possible through the web page (in this case a Java page) to trigger a Java procedure, a PL/SQL procedure, or a combination of both to load data into the staging area? If yes, how would it affect the performance of a data load with a 1 GB file?
    Case 2: the OWB client software is installed on the user's machine. Does passing parameters at runtime mean passing them manually?
    In case it is automated, how should I pass the machine name and path to the OWB runtime web browser?
    Could you please guide me on how I should achieve this functionality with the APEX customization part?
    Thanks again for your support.
    Anil

  • Can't compile proc with OWB-created DB links - appears to loop back?!

    We're on Oracle 9.2 with OWB 9.2 and have the following scenario:
    We have two databases: DB1, DB2
    Both databases have a schema STAGING
    Both schemas have a table ADDRESS.
    The ADDRESS table on DB1 has columns CITY, STATE, ZIP while the ADDRESS table on DB2 just has CITY, STATE
    We have a stored proc on DB2 that is trying to select the ZIP field on DB1 across a database link called DB1@SERVER1. The exact code is:
    select zip into v_zip from address@db1@server1;
    The compiler fails with an error that ZIP does not exist. If I change the column to STATE it compiles successfully, leading me to believe the compiler is verifying the code against the local db and not across the link...
    Furthermore, if I run the offending sql via sqlplus it crosses the link and works!... it appears to only affect compile time.
    I believe this has something to do with loopback links, but I haven't been able to solve it...
    Can anyone explain this behavior?!
    Any help greatly appreciated...
    -D

    Hello, Dan
    Actually I don't understand why OWB generates such a strange name for the db link. After your post I revisited my projects to see if there were any inconsistencies between the db link name generated within the mapping PL/SQL code and the db link name itself.
    There is at least one, for me: in the PL/SQL the db link's name is like "<db_instance_name>"@"owb_location_name", but in the DB links dictionary view the name looks like db_instance_name.db_domain@owb_location_name. I must say it's quite a trick to me how Oracle matches these two names, considering the quotes and db_domain. Anyway, it works fine for me.
    By the way, are your two databases the result of "cloning" one of them? If you are sure the real symptom is a db link loopback, look at the Net configuration.
    Try to have a look at your DB links - maybe you'll find some mismatches. As a first try: drop the created db links and redeploy them, run additional tests via sqlplus, verify db_domain on both databases, and check sqlnet.ora/tnsnames.ora on both databases.
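    A couple of dictionary queries that can help with that comparison (run them on both databases):

        -- How the database identifies itself; db link name resolution depends on this.
        SELECT * FROM global_name;

        -- The db links as actually stored, including any appended db_domain.
        SELECT db_link, host, username FROM all_db_links;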
    Serhit.

  • Oracle 8.1.5 Invalid Staging Area Error

    I get the error message: "Invalid staging area. There are no top level components for Windows NT available for installing from this staging area." I installed this version successfully under Win98, but I get this error with XP. I tried renaming symcjit.dll as suggested in some earlier postings, with no success. Any help is appreciated.
    Thanks,
    Will

    Copy the disk to the hard drive in a temporary directory. Go to Properties for setup.exe (right-click on the file used to install Oracle). Select the Compatibility tab. Make sure the check box "Run this program in compatibility mode" is checked. In the combo box select Windows 98.

  • Mappings out of synch with OWB repository

    Hi,
    I have a global problem with my repository: every mapping reports (many) validation errors advising me to reconcile inbound or outbound.
    First question: how could this happen on such a global scale (affecting every mapping)?
    Second: how do I fix it? When I reconcile inbound, a) all the links to columns are lost and I have to manually re-attach them, which is risky, let alone tedious; b) after reconciling inbound, some tables don't match what is in the repository (columns missing). The only way around this was to delete the table, drag it back into the mapping, and re-attach the columns.
    Third question: since the prod versions of these mappings are running successfully, if I deploy a mapping that is out of sync with the repository will it still function as it should?
    Any help appreciated.
    Cheers,
    Brandon

    I think the problem you are having is that your mapping operators are not reconciled to the repository objects. It is purely a logical OWB problem of matching a mapping operator to an OWB object. It should not affect your business logic or the code that is generated by OWB.
    The warnings are just that, warnings. If you were missing a connection between objects, you'd get an error.
    Even though an operator in your map exists as an object in your repository, they are not properly bound. If you reconcile inbound each object (match by UOID and by name), you should stop getting the warnings.
    This is one of those quirks with OWB (at least with 9.2.0.8, which I am using). I noticed that if I import a mapping from another repository, the operators become unbound. I've given up trying to fix them each time; I just ignore the warnings.

  • Staging Area - Sans Transformations?

    I am struggling to come up with a proper term for a database that will contain untransformed application data. This would be an EL process that pulls application data from the source and inserts it into the "staging" area. A subsequent job will perform the ETL process into the data warehouse. The goal for this area is to have unaltered application data to use as the source for our ETL processes, so we will not have to deal with the retention policies associated with the actual application data.

    Often Staging in a data warehouse is permanent and long-term. If you purge the Staging area, you have to make sure that all needed information is making it into the Data Warehouse. If you leave the data in Staging, it's available for validation and to support design changes in the Data Warehouse.
    Also, I'm seeing more direct access to this raw area of the DW, as "Big Data" workflows gain popularity and complement more traditional approaches to analytics. Broadly, in "Big Data" the analysts/end-users get access to raw source data quickly, and then do the transforms and reporting together as part of an ad-hoc analysis.
    David
    http://blogs.msdn.com/b/dbrowne/
