Warehouse loading

Hi Guys,
Fairly new to this streams thing so hoping for some sound advice from any gurus out there.
We're looking to load two schemas from a production DB (9.2.0.8) into a data warehouse on 10.2.0.1 using streams, but have a few questions that may save me plenty of heartache.
Is it really possible to use downstream capture between 9.2.0.8 and 10.2.0.1? Any particular issues we'd be creating for ourselves?
Is it advisable to run the destination in archivelog mode, bearing in mind the next question....
In the event of a catastrophic failure at the warehouse end, how would I go about recovering and 'catching up' with transactions in the production DB?
Any advice regarding implementing the above planned setup would be gratefully received and appreciated. Thanks.

Hi,
See the answers below:
--Is it really possible to use downstream capture between 9.2.0.8 and 10.2.0.1? Any
--particular issues we'd be creating for ourselves?
No, it is not supported. From the documentation:
"Operational Requirements for Downstream Capture
The following are operational requirements for using downstream capture:
The source database must be running at least Oracle Database 10g and the downstream capture database must be running the same release of Oracle as the source database or later."
Ref: Oracle® Streams Concepts and Administration 10g Release 2 (10.2), Chapter 2
--Is it advisable to run the destination in archivelog mode, bearing in mind the next
--question....
--In the event of a catastrophic failure at the warehouse end, how would I go about
--recovering and 'catching up' with transactions in the production DB?
In downstream capture, you can always 'replay' the capture as long as you periodically run DBMS_CAPTURE_ADM.BUILD procedure to extract the data dictionary to the redo log on the source database.
I run this build twice a week, so in the event of losing the database, I can re-create the capture and extract the data I lost. Of course, you need to have a well documented and proven procedure to do this. Also, you need to keep the archived logs needed to do this.
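For reference, a minimal sketch of that periodic dictionary build on a 10g source, assuming the Streams administrator has EXECUTE on DBMS_CAPTURE_ADM and server output enabled; record the SCN it returns (and keep the archived logs from that point on) so a capture process can later be re-created with its first_scn set to it:

DECLARE
  build_scn NUMBER;
BEGIN
  -- writes the current data dictionary into the redo stream
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_scn);
  -- keep this value with your recovery notes
  DBMS_OUTPUT.PUT_LINE('Dictionary build at SCN ' || build_scn);
END;
/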
--Any advice regarding implementing the above planned setup would be gratefully
--received and appreciated.
Since you cannot run downstream capture from 9i to 10g, you would have to implement a local capture at the source database(s) and then propagate the changes to the 10.2.0.1 database.
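A rough sketch of that local-capture-plus-propagation setup on the 9.2 source is below; the schema, queue, streams, and database link names are placeholders, and a queue plus an apply process still has to be configured on the 10.2.0.1 warehouse side:

BEGIN
  -- queue on the source that the local capture process will enqueue into
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strm_capture_qt',
    queue_name  => 'strm_capture_q');

  -- capture DML for one schema locally on the production database
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name  => 'SALES',
    streams_type => 'capture',
    streams_name => 'capture_sales',
    queue_name   => 'strm_capture_q',
    include_dml  => TRUE,
    include_ddl  => FALSE);

  -- push the captured changes to the warehouse queue over a database link
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'SALES',
    streams_name           => 'prop_sales_to_dw',
    source_queue_name      => 'strm_capture_q',
    destination_queue_name => 'strmadmin.strm_apply_q@dw_link',
    include_dml            => TRUE,
    include_ddl            => FALSE,
    source_database        => 'PROD');
END;
/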
Aldo

Similar Messages

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse load ETL process. I have run
    analyze and dbms_stats and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    You should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000);
    drop all unneeded indexes while loading and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already direct load?
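    Putting those suggestions together, a minimal sketch; the sequence, table, and index names are placeholders, and skipping unusable indexes only works for non-unique indexes:

    ALTER SEQUENCE my_pk_seq CACHE 10000;            -- fewer dictionary hits per NEXTVAL

    ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE;  -- let the load run with indexes marked unusable
    ALTER INDEX my_fact_ix1 UNUSABLE;                -- or drop indexes you don't need during the load
    ALTER TABLE my_fact DISABLE ALL TRIGGERS;

    INSERT /*+ APPEND */ INTO my_fact                -- direct load: writes above the high-water mark
    SELECT * FROM my_staging;
    COMMIT;

    ALTER INDEX my_fact_ix1 REBUILD;                 -- rebuild and re-enable afterwards
    ALTER TABLE my_fact ENABLE ALL TRIGGERS;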
    Dim

  • Data warehouse Loader did not write the data

    Hi,
    I need to know which products are the most searched. I know the tables responsible for storing this information
    are ARF_QUERY and ARF_QUESTION. I already have the Data Warehouse module loader running. If anyone knows
    why the data warehouse loader did not write the data to the database, I would be grateful.
    Thanks.

    I have configured the Data Warehouse Loader and its components, and I have enabled the logging mechanism.
    I can manually pass the log files into the queue and then populate the data into the Data Warehouse database through scheduling.
    The log file data is populated into this queue through JMS message processing, and this should be automated, but I am unable to
    configure it.
    Which method is responsible for adding the log file data to the loader queue, and how can this be automated?

  • Problem during Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimensions and cube that I have created.
    My scenario:
    I have a temp_table_transaction which was loaded from my flat files. This table was loaded with 168,271,269 records from the flat file.
    I have created a mapping in OWB which takes temp_table_transaction, joins it with other tables, applies some expressions and conversion functions, and fills a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
    Default operating mode in the mapping's runtime parameters = Set based
    My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem 1:
    I have created a cube called transaction_cube with OWB and it generated and deployed correctly.
    I have created a map to fill my cube from the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (my cube map operating mode is set based),
    but after running this map it had not completed after 9 hours and I was forced to cancel the run by killing its sessions. I want to know whether this load time is acceptable for this volume of data, or whether we should expect to spend more time. Please let me know if anybody has seen this issue.
    Problem 2:
    To test my map I created a map configured set based in operating mode, selected stg_tbl_transaction (168,271,268 records) as the source, and created another table to transfer and load my data into. I wanted to measure the time we should spend on this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration, or something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on two-socket Xeon 5500 series with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar
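    For what it's worth, a set-based OWB mapping like the one described above boils down to a single INSERT ... SELECT, so one way to isolate the problem is to benchmark the equivalent statement by hand outside OWB. A minimal sketch, with the target table and column names purely hypothetical (only stg_tbl_transaction comes from the post):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(transaction_fact, 8) */ INTO transaction_fact (trans_id, trans_date, amount)
    SELECT /*+ PARALLEL(s, 8) */ s.trans_id, s.trans_date, s.amount
    FROM   stg_tbl_transaction s;
    COMMIT;

    If this hand-written statement finishes in a reasonable time but the deployed map does not, compare the generated code and the target's indexes and constraints before blaming the data volume.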

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data really are loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe

  • DTW Warehouse load

    In 2007A USA localization, we are trying to load warehouses using the oWarehouse object and get the message "One of the inventory accounts is missing 'Allocation Account'".  However neither the template nor the Maintain Interface contains a column for 'Allocation Account'.  We have tried "PurchaseAccount", "PurchaseOffsetAccount" and "PurchaseVarianceAccount" with no luck.
    Does anybody have a template that works, or can you provide a mapping between the Warehouse Accounting tab and the oWarehouse columns?
    Thanks, Jeff

    It can be a little maddening. The nomenclature for error messages in DTW is inconsistent. One thing that helps is to cross-reference the field name in the error message with the description column for the table in the SDK.
    In the SDK, look up the warehouse table: it is OWHS. Look in the description column for 'Allocation Account'; it is the description for the field 'TransferAc'. The closest match in the template is 'TransfersAcc'.
    Fun, huh?
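    As a quick sanity check after the import, a minimal query against the table the SDK points to (OWHS), using the field named in the error message; column names should be verified against your SDK version:

    SELECT WhsCode, WhsName, TransferAc
    FROM   OWHS;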

  • Sharepoint 2013 and SSRS how to send reports on date schedule after dw load completes

    Certainly with subscriptions we can generate an SSRS report on a schedule, say every Monday morning at 5 AM PT. My problem is that I want those reports to run only after the data warehouse has completed its load. For example, if
    for some reason the DW breaks at 4 AM and does not finish the load, the reports should not run. Once the DW is finished, the reports should run. The 5 AM is really a placeholder for the first attempt to send the reports. It should keep trying until
    it can send them or Tuesday comes around.
    The only approach I can think of is via the DW: with a job and a stored procedure you could have it exec anything you want. Is it possible to exec a Reporting Services report from SQL? Is there a way from within SharePoint?
    Ken Craig

    Hi Ken,
    According to your description, you want to fire the SQL Server Reporting Services subscription after the Data Warehouse load completes, right?
    By default, when a subscription is created, a corresponding SQL Server Agent job is created as well. The SQL Server Agent job has the same schedule as the shared schedule or report-specific schedule used by the subscription.
    The corresponding SQL Server Agent job calls the stored procedure AddEvent to add an event in SSRS. The SSRS notification service then fetches the event from the SSRS database to deliver the subscription.
    So in your scenario, you can configure the job steps to fire the subscription after the Data Warehouse load completes. For the details, please refer to the link below.
    http://social.msdn.microsoft.com/Forums/en-US/32bc6d2d-5baa-4e27-9267-96a4bb90d5ec/forum-faq-how-to-configure-an-irregular-schedule-for-report-subscription?forum=sqlreportingservices
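    If you go the job-step route, a minimal sketch of the kind of step that fires a subscription from SQL at the end of the DW load; the report server database name and the subscription GUID (from ReportServer.dbo.Subscriptions) are placeholders:

    EXEC ReportServer.dbo.AddEvent
         @EventType = N'TimedSubscription',
         @EventData = N'00000000-0000-0000-0000-000000000000';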
    Regards,
    Charlie Liao
    TechNet Community Support

  • Financial Analytics Implementation

    Hi guys
    I am looking for generic implementation plans for Financial Analytics to validate the assumption the product can be installed in around 3 months.
    Any assistance would be greatly appreciated.

    This is one partial plan; you will likely have to change it to suit your needs.
    Plan and Manage
    - Establish project objectives and scope
    - Prepare for the kickoff design session
    Gather Business Requirements
    Install and Configure Software
    Install and Configure OBIEE Software
    Test the environment
    Install client software on developer laptops
    Implement and configure Prebuilt BI Applications
    -Configure common components of enterprise warehouse
    -Set up Security Model and Access Control
    -Create Metadata Schemas for DAC and Informatica
    -Import Metadata into DAC and Informatica Schemas
    Configure DAC Server
    - Verify all dependencies of jobs are loaded
    - Disable jobs not required
    - Unit test DAC
    Configure Informatica Server
    - Verify all required mappings are loaded
    - Verify all connections are established
    - Unit test mappings and make necessary adjustments
    - Create BAW Enterprise warehouse
    - Load and Verify metadata
    Configure Out-Of-Box (OOB) AR, AP, and GL Reports
    -Load/Test subset of data into enterprise warehouse from EBS
    -Assess effort to modify GAP
    -Design Modifications
    -Test the new ETL processes for accuracy and efficiency
    -Load and test
    Performance tune the Load and Reports
    Reports
    -Select/Modify desired AR and AP Dashboard reports
    -Validate that AR and AP data is correct in the OOB reports
    -Review with the client team and select the useful reports
    -Remove unwanted reports and make modifications per client
    -Test the Dashboard reports
    -Obtain client signoff on the AR and AP reports and dashboards
    Deploy Data Warehouse Solution
    -Migrate solution to Production and Validate the process
    -Set up OBIEE Dashboard security
    -Schedule data warehouse refresh processes
    -Hold a pre-deployment demonstration for all beta users
    -Train Beta users on OLAP/report usage
    -Train I.T. personnel on deployment process
    -Contingency time for unexpected tasks
    -Go live with initial group of users
    -Post Implementation Support

  • Allow for timezone shifting for Global Reporting.

    Hi Everyone,
    I would like to enable end users to be able to dynamically shift the timezone that data is being reported on. Before I go into the technical details, let me outline the business case.
    Our Setup:
    -- We are a global company and we have OBIEE users in many different timezones around the world.
    -- But our Order Management system (Oracle EBS) processes every order in EST.
    -- We pull the data into a warehouse and store the dates and times in EST.
    The Issue:
    -- Because of the timezone differences, it is difficult to understand Sales per day and especially per hour.
    -- For example, Japan users are 14 hours ahead of us on EST right now, so their Saturday noon sales would show up as 10PM EST on Friday which is an odd hour to see a spike in a sales report.
    I want to have a way to shift the timestamp, and date if necessary, dynamically based on a user selected prompt. This way the sales make sense for the local reporting team.
    One of the ideas I came up with was having the warehouse loaded with multiple timestamp and date columns, but that seems like a brute force method because we'd need to add and maintain a bunch of columns for each date we want to do this to.
    I'd prefer something more elegant than that method.
    Does OBIEE 10g or 11g have features that will shift times and dates like this? Does anyone have any tricks for this?
    Thanks!
    -=Joe
    Edited by: user9208525 on Oct 20, 2011 1:06 PM
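    One database-side option worth sketching here (the table, column, and time zone names are hypothetical) is to keep the EST timestamps as stored and convert them on the fly with FROM_TZ / AT TIME ZONE, with the target zone supplied by a prompt or session variable rather than by extra stored columns:

    SELECT order_id,
           FROM_TZ(CAST(order_ts AS TIMESTAMP), 'America/New_York')
             AT TIME ZONE 'Asia/Tokyo' AS order_ts_local
    FROM   sales_orders;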

    Hi
    The Personnel Area and Personnel Subarea are defined as country-specific. If the same Personnel Subarea name exists for different countries, from my point of view you have to present the Personnel Area for the corresponding Personnel Subarea that you need to address.
    Cheers
    Jun

  • Search in txt file using PL/SQL

    Is there any way to search for a string in a text file using a PL/SQL block??
    Thanks in advance.
    Ashish

    Richard:
    It would depend on the nature of the text file, but it could be done for most plain text files. If there is some consistent delimiter, that is, the text file is fielded in some way, you could define an external table with the appropriate structure then use normal SQL queries against the particular fields from the file.
    If the file is just plain text with no fields, you could create an external table something like:
    CREATE TABLE myexternal (
       line VARCHAR2(4000))
    ORGANIZATION EXTERNAL
    (TYPE oracle_loader
     DEFAULT DIRECTORY mydirectory
     ACCESS PARAMETERS
     (RECORDS DELIMITED BY newline
      BADFILE 'textfile.bad'
      DISCARDFILE 'textfile.dis'
      LOGFILE 'textfile.log'
      FIELDS (line CHAR(4000) NULLIF line = BLANKS))
     LOCATION ('textfile.txt'))
    REJECT LIMIT UNLIMITED;
    Then use normal SQL to search the contents of that file.
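    For example, a search over the file once the external table exists (the search string is a placeholder):
    SELECT line
    FROM   myexternal
    WHERE  line LIKE '%some text%';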
    I have not done any benchmarks on this, but my gut feel is that it would be significantly faster than using utl_file to loop over the lines looking for specific content.
    In one of our warehouse load programs, we process a text file that is typically about 7Mb in under 30 seconds.
    John

  • Exp function and numeric overflow ORA-01426

    I've got a problem that has been driving me slightly mad over the past few days. I have been trying to code some actuarial functions that I don't really understand for a data warehouse load on Oracle 9.2.0.6
    I am trying to use exp() and power() functions in an insert statement. There is some slightly dodgy data in the source table that is causing a numeric overflow to occur for some rows so I have written a function which calculates the values and returns the result. If an exception occurs, this will be caught and the result set to 0.
    The problem I am finding is that within the function, the power() function will successfully raise a value_error exception when a problem occurs. However, the exp() function will not. When calling exp() with a large number as the parameter, it simply returns 1e126, which subsequently causes a numeric overflow when calculating the value for another column.
    However, the numeric overflow is raised if I use SELECT exp(...) INTO ... FROM dual.
    Is this just a weird inconsistency between the SQL and PL/SQL engines? I thought these problems had been resolved with Oracle 9, but I guess not.
    I think I can probably work-around the problem for here, just looking for confirmation that my thoughts are correct.
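    For anyone wanting to reproduce the difference described above, a minimal sketch (behaviour as reported on 9.2.0.6; later releases may differ):

    SET SERVEROUTPUT ON
    DECLARE
      n NUMBER;
    BEGIN
      n := EXP(500);                        -- PL/SQL engine: reportedly returns ~1E126 instead of raising
      DBMS_OUTPUT.PUT_LINE('PL/SQL: ' || n);
      SELECT EXP(500) INTO n FROM dual;     -- SQL engine: raises ORA-01426 numeric overflow
    END;
    /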

    I hadn't realised that 9.2.0.8 was the most recent patch set for Oracle 9i, or that we were a couple of patch sets behind. Maybe I'll have to chase that up, but I'm sure we've got a reason for not patching. Whether that is a good reason or not is probably debatable.
    An upgrade to 10g or 11i is in the pipeline but it is currently moving at glacial speed.

  • Aggregates

    Hi Experts,
    I have some doubts,
    *Can we create Aggregates on Hierarchies?
    *What is Match Box in Business Content?
    *What is Slowly changing Dimension?
    *What is Navigational Attribute in ODS?
    *Will Navigational Attribute affect the Query performance?
    Regards,
    Siva

    Hi
    An aggregate is a rollup of fact data where a total value is sufficient and no detailed information is needed.
    So aggregates are like InfoCubes, except that they summarize or aggregate data from an InfoCube.
    When you use an aggregate, the summarization it represents does not need to be done at runtime.
    Aggregate functions happen in the background; they are not visible to the end user.
    Aggregates can be created:
    For BasicCubes
    On dimension characteristics
    On navigational attributes
    On hierarchy levels
    Aggregates cannot be created on:
    MultiProvider
    RemoteCube
    ODS object
    InfoSet
    When do we need to use aggregates?
    As a rule of thumb, an aggregate is reasonable and may be created if:
    Aggregation ratio > 10, i.e. ten times more records are read than are displayed (aggregation ratio = number of records read from the DB / number of records transferred)
    AND
    Percentage of DB time > 30%, i.e. the time spent on the database is a substantial part of the whole query runtime.
    The rule of thumb above tells us when an aggregate should help, but where can we check that the aggregation ratio is > 10 and the percentage of DB time is > 30%?
    AGGREGATES
    There are three methods for viewing this data.
    1. You can see it in table RSDDSTAT.
    2. By using transaction ST03
    (Expert mode -> BW System Load -> last minute's load)
    3. By implementing BW Statistics
    (AWB -> Tools -> BW Statistics for InfoCubes
    O: OLAP (front end)
    W: Warehouse (loading)
    -> specify from date and to date
    -> select the cube
    -> execute)
    You will have the following fields
    QDBSL - No. of records selected
    QDBTRANS - No. of records transferred.
    QTIMEOLAP - OLAP time
    QTIMEDB - DB time
    QTIMECLIENT - Front end time
    If the QDBSL/QDBTRANS ratio is > 10 and the DB time is > 30%, we should use aggregates.
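    As a rough sketch of checking those thresholds directly, a query over RSDDSTAT using the field names quoted above (verify the exact column names and the InfoCube field against your release before relying on it):

    SELECT infocube,
           SUM(qdbsl)                                      AS recs_read,
           SUM(qdbtrans)                                   AS recs_transferred,
           ROUND(SUM(qdbsl) / NULLIF(SUM(qdbtrans), 0), 1) AS aggregation_ratio,
           ROUND(100 * SUM(qtimedb) /
                 NULLIF(SUM(qtimedb) + SUM(qtimeolap) + SUM(qtimeclient), 0), 1) AS pct_db_time
    FROM   rsddstat
    GROUP  BY infocube;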
    Why do we need to use this?
    Rollup: moving data from the cube to the aggregate.
    If you have aggregates on the cube, data will not be available for reporting until you roll up the data into the aggregates.
    If you use a navigational attribute in the report, you can drill down to the lowest level of detail. You can find out whether an attribute is display or navigational on the "Attributes" tab of the InfoObject screen.
    Navigational attributes are part of the “extended star schema”. Navigational attributes require additional table joins at runtime (in comparison to dimension characteristics) and so have an impact on performance
    – but usually the decision between dimension characteristics and navigational attributes is based on business requirements rather than on performance considerations.
    Navigational attributes are joins that need to be performed at query execution. There is no limit that I know of for navigational attributes, but for large queries it is desirable to minimize them.
    Joins at runtime for query generation are very expensive and should be minimized.
    Check the following link to get an idea of nav attribute.
    http://help.sap.com/erp2005_ehp_03/helpdata/EN/2e/caceae8dd08e48a63c2da60c8ffa5e/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/80/1a63e7e07211d2acb80000e829fbfe/frameset.htm
    hope it helps
    Regards
    gaurav

  • What is the main purpose of aggregates

    hi friends,
    I need clarification on Aggregates.
    => Why do we create aggregates?
    => What happens in the background after creating an aggregate, and why is there a switch on/off option?
    cheers
    mohhan


  • Running Cursor script to update Oracle Table.

    I have the following script. I have a cursor in which I perform an update operation on an Oracle table. But the table "ICS_TRADE_DETAILS" is not getting updated. Am I doing something wrong? I get the correct values populated in the "lastChanged" and "tradeID" fields.
    Help Appreciated !!!!
    DECLARE
    lastChanged VARCHAR2(32);
    tradeID VARCHAR2(32);
    CURSOR c1 IS
    SELECT TRADEID,LASTCHANGED
    from CVSELECT;
    BEGIN
    OPEN c1;
    LOOP
    FETCH c1 INTO tradeID,lastChanged;
    DBMS_OUTPUT.PUT_LINE('lastChanged: '||lastChanged);
    DBMS_OUTPUT.PUT_LINE('tradeID: '||tradeID);
    update ICS_TRADE_DETAILS
    SET LASTCHANGED=lastChanged
    WHERE CTRADEID=tradeID;
    COMMIT;
    EXIT WHEN c1%NOTFOUND;
    END LOOP;
    CLOSE c1;
    END;

    ji li wrote:
    Is this related to someone else pulling data from the table(s) you are updating (and committing frequently)?
    If so, wouldn't the undo segments hold enough of the changed data for the dataset to be consistent?
    The reason I ask is because I've always been of the impression it was better to commit frequently as opposed to doing autonomous (all or none) processing.
    When you open a cursor, Oracle needs to fetch data as of that particular SCN. So if someone is potentially updating the table while you are reading data, you want to make sure that Oracle will have the UNDO data in hand to be able to get back to the old state. If you commit in the loop, however, Oracle now believes that your session is no longer interested in older UNDO data, so it may well purge that data too quickly, causing ORA-01555 errors. Fetching across a commit is almost certainly a bad idea.
    Commit frequency should be driven exclusively by logical units of work. If you have a loop, the logical unit of work is almost always the whole set of rows that you want to process. If processing dies in the middle, you're generally much better off having everything rolled back than in having half the rows processed and not knowing which half were processed and which half were not. If you are processing extremely large numbers of rows (i.e. data warehouse loads), it is sometimes worthwhile to code all the extra logic required to make the process restartable and to commit periodically in order to avoid situations where something dies 2 hours into a run and you have to spend another 2 hours rolling back those changes before you can restart. But that's the exception to the rule and generally only appropriate after spending quite a bit of effort performance tuning which would remove 99% of loops in the first place.
    Justin
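    A minimal sketch of the loop-free version that last paragraph points to, using the table and column names from the original post and assuming TRADEID is unique in CVSELECT; one statement, one commit, one logical unit of work:

    UPDATE ics_trade_details t
    SET    t.lastchanged = (SELECT c.lastchanged
                            FROM   cvselect c
                            WHERE  c.tradeid = t.ctradeid)
    WHERE  EXISTS (SELECT 1
                   FROM   cvselect c
                   WHERE  c.tradeid = t.ctradeid);
    COMMIT;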

  • Dataguard for replication

    Hello there,
    If I am using external NetApp storage and SnapMirror to replicate data to my storage array at the DR site, do I still need to use Data Guard for log shipping?
    I am a little confused about where both fit. Are they basically both replication technologies, Data Guard being at the database level and NetApp using storage-level replication?
    Advice would be much appreciated
    Cheers
    C

    Oracle Data Guard is more for high availability and data redundancy; that is what it was designed for and should be used for. For example, you can only open the standby database for reading (with both logical and physical standby; you have a little more freedom with a logical standby, where tables that are not maintained through the regenerated SQL transactions allow read/write access).
    Based on your scenario I don't think Data Guard is a good fit for you, as Oracle Replication and Streams are designed for your purpose.
    One of the primary uses of Streams is data warehouse loading.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_over.htm#CHDHJIBF
    just my 2cents

  • Io exception: Connection refused

    I have a main process flow that incorporates a number of sub-process flows to perform a complete warehouse load. Whenever I execute this process flow, it fails with the error shown below. It fails at a different subprocess each time I run it, and when I execute just that subprocess by itself, the subprocess completes successfully. I have no idea what causes this error or how to prevent it from recurring:
    RPE-02083: Process ITM_LOAD has errored Activities. Dependent objects may not have been deployed. You can use Oracle Workflow Monitor to retry the activities or abort the Process.
    RPE-02018: Oracle Workflow schema OWF_MGR on host <hostname> cannot be accessed using service <database name> through port 1521. Please check the location details and try again.
    Io exception: Connection refused(DESCRIPTION=(TMP=)(VSNNUM=153092352)(ERR=12516)(ERROR_STACK=(ERROR=(CODE=12516)(EMFI=4))))
    I would really appreciate any help on this!!

    Julie,
    Can you check the location registration for all locations? It looks like you hit TNS-12516:
    TNS-12516 TNS:listener could not find available handler with matching protocol stack
    Cause: None of the known and available service handlers for the given SERVICE_NAME support the client's protocol stack: transport, session, and presentation protocols.
    Action: Check to make sure that the service handlers (for example, dispatchers) for the given SERVICE_NAME are registered with the listener, are accepting connections, and that they are properly configured to support the desired protocols.
    Mark.
