Data must reside in database?

Question: My reading of the various index types provided by Oracle Text indicates that a CONTEXT index must index local data, i.e., data that physically resides in a database table as a BLOB, CLOB, or VARCHAR2 column. Some of the other index types can index data pointed to by URLs (the URL is in the table but the actual document text is not), but CONTEXT indexes cannot do this. Is this correct?
How does Ultra Search compare?
Thanks.

Upon further reading, it looks like CTXRULE is what requires the data to be in tables rather than user datastores. Does anyone know why that is? What is it about classification that dictates where the source documents reside?
Thanks.
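
For reference, a minimal sketch of the two index types being compared; the docs and queries tables and their columns are hypothetical:

-- A CONTEXT index indexes stored documents for CONTAINS queries.
CREATE INDEX docs_ctx ON docs (doc_text)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT id FROM docs
 WHERE CONTAINS (doc_text, 'oracle AND text') > 0;

-- A CTXRULE index indexes a column of stored queries (the rules);
-- classification matches an incoming document against those rules.
CREATE INDEX queries_rule ON queries (query_string)
  INDEXTYPE IS CTXSYS.CTXRULE;

SELECT query_id FROM queries
 WHERE MATCHES (query_string, :incoming_document) > 0;

Since MATCHES compares a document supplied at query time against rules stored in the indexed column, the rules themselves must be table-resident, which may be why CTXRULE is tied to table data while document indexes can also draw on URL datastores.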

Similar Messages

  • FRM-30425: Summarized database item must reside in a

    FRM-30425: Summarized database item must reside in a block with Query All Records or Precompute Summaries set to Yes.
    I am getting this error when I compile the form, even though I have set Query All Records to Yes, set the item's Calculation Mode to Summary and its Summary Function to Sum, and specified the data block and item to be totaled.
    Has anyone had a similar error, and what was the resolution?
    Your response is greatly appreciated.
    Thanks
    Krishna

    kamaran,
    Invoke the Property Palette for the block and set the Query All Records property (under the Records group) to Yes.
    Hope this will help.
    siju.koiply

  • ORA-28150 when accessing data from a remote database

    Portal Version: Portal 3.0.9.8.0
    RDBMS Version: 8.1.6.3.0
    OS/Vers. Where Portal is Installed:: Solaris 2.6
    Error Number(s):: ORA-28150
    I have a problem with using a database link to access a table in
    a remote database. So long as the dblink uses explicit logins
    then everything works correctly. When the dblink does not have a
    username then I get the ORA-28150 message. The database link is
    always public. A synonym is created locally that points to a
    synonym in the remote database. I am using the same Oracle user
    in both databases. The Oracle portal lightweight user has this
    same Oracle user as its default schema. The contents of the
    remote table are always visible to sqlplus, both when the link
    has a username and when it doesn't have a username.
    All the databases involved are on the same version of Oracle.
    I'm not sure which Oracle login is being used to access the
    remote database, if my lightweight user has a database schema
    of 'xyz' then does portal use 'xyz' to access the remote
    database? I would be very grateful for any help or pointers that
    might help to solve this problem.
    James
    To further clarify this, both my local and remote database
    schemas are owned by the same login.
    The remote table has a public synonym.
    The link is public but uses default rather than explicit logins.
    The local table has a public synonym that points to the remote
    synonym via the database link.
    If I change the link to have an explicit login then everything
    works correctly.
    I can view the data in the remote database with TOAD and with
    sqlplus even when the database link has default login rather
    than explicit login.
    This seems to point to Portal as being the culprit. Can anyone
    tell me whether default logins can be used across database links
    with portal?
    TIA
    James
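
    For reference, the two flavors of public link being compared would look roughly like this (the link and TNS names are hypothetical):

    -- Explicit (fixed-user) login: works in the setup described above.
    CREATE PUBLIC DATABASE LINK remote_link
      CONNECT TO xyz IDENTIFIED BY xyz_password
      USING 'REMOTE_TNS';

    -- Default (connected-user) login: the local session's identity is
    -- forwarded, and this is the variant that raises ORA-28150 here.
    CREATE PUBLIC DATABASE LINK remote_link
      USING 'REMOTE_TNS';

    Since ORA-28150 is "proxy not authorized to connect as client", a plausible reading is that the Portal lightweight user's proxy-authenticated session cannot be forwarded over a connected-user link, which would fit the symptoms described.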

    832019 wrote:
    "One way to do this is by creating a database link and joining the two tables directly. But this option is ruled out. So please suggest me some way of doing this."
    Thus you MUST use two connection strings.
    Thus you are either going to be constructing some intricate SQL dynamically, or you are going to be dragging a lot of data over the wire and doing an in-memory search.
    Although realistically that is what the database link table would have done as well.
    Might be better to look at moving the table data from one database to the other. Depends on size of course.

  • APEX Application accessing data from two different databases

    Hi All,
    As we all know, an APEX application resides in a database and is connected to a schema of that database.
    I want my APEX application to run against, and access data from, two different databases. To elaborate on my question:
    Currently, my APEX production application is connected to schema XXXX of database DB1 (where APEX resides). Now I want to add some pages to this application for reporting purposes, but I want those report pages to get their data from a different schema, YYYY, in database DB2.
    Is it possible to configure this scenario?
    The reason for doing this is to keep report-related (ad hoc query) resource utilization from affecting the production DB1 database.
    Thanks
    Nil

    1. If you do the join of two or more tables in DB1, then all the data is pulled over to DB1 before the join is executed: more data over the database link and more work for DB1. Better to keep the join where the data resides and pull over exactly the data you need.
    2. Don't know about your different block sizes. Seems a nice question for one of the other forums (DBA or SQL).
    3. I mean: create synonyms in DB1 for the report views in DB2 (a sketch follows below).
    Hope all is clear!
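
    A minimal sketch of that synonym-over-a-link arrangement (the link, schema, and view names are hypothetical):

    -- In DB1: a link to the report schema in DB2...
    CREATE DATABASE LINK db2_link
      CONNECT TO yyyy IDENTIFIED BY yyyy_password
      USING 'DB2_TNS';

    -- ...and a synonym so the APEX report pages can query the remote view by name.
    CREATE SYNONYM report_summary FOR yyyy.report_summary_v@db2_link;

    -- An APEX report region in DB1 then selects from it as if it were local:
    SELECT * FROM report_summary;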

  • Importing non-unicode data into unicode 10gR2 database

    Hi:
    I will have to import non-Unicode data into a Unicode 10gR2 database. The systems the data is coming from are the following: CODA, Timberline, COMMS, CMS, LIMS. These are all SQL-enabled RDBMS systems. We are talking about pretty big amounts of data (a couple hundred GB combined).
    Did anybody go through a similar exercise?
    I know I'll have to set nls_length_semantics to CHAR.
    What other recommendations could you guys give?
    TIA,
    Greg

    I think "nls_length_semantics" isn't mandatory at this point, and you must extract a little quantity of information from every source and do some probes injecting them into the Oracle10g database.

  • How to check data in physical standby database?

    Hi!
    I'm maintaining a physical standby database system. Every day I must check transfer and apply progress. I know it is operating well; no archive gap is found.
    But I want to check that the data in the physical standby database is consistent with the primary database. What method should I use, and how?
    Thanks

    I hope the following will help
    Verifying the Physical Standby Database
    ==========================================
    Once you create the physical standby database and set up log transport services, you may want to verify that database modifications are being successfully shipped from the primary database to the standby database.
    To see the new archived redo logs that were received on the standby database, you should first identify the existing archived redo logs on the standby database, archive a few logs on the primary database, and then check the standby database again. The following steps show how to perform these tasks.
    Step 1 Identify the existing archived redo logs.
    On the standby database, query the V$ARCHIVED_LOG view to identify existing archived redo logs. For example:
    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
    2 FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# FIRST_TIME         NEXT_TIME
    ---------- ------------------ ------------------
             8 11-JUL-02 17:50:45 11-JUL-02 17:50:53
             9 11-JUL-02 17:50:53 11-JUL-02 17:50:58
            10 11-JUL-02 17:50:58 11-JUL-02 17:51:03
    3 rows selected.
    Step 2 Archive the current log.
    On the primary database, archive the current log using the following SQL statement:
    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
    Step 3 Verify that the new archived redo log was received.
    On the standby database, query the V$ARCHIVED_LOG view to verify the redo log was received:
    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
    2> FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    SEQUENCE# FIRST_TIME         NEXT_TIME
    ---------- ------------------ ------------------
             8 11-JUL-02 17:50:45 11-JUL-02 17:50:53
             9 11-JUL-02 17:50:53 11-JUL-02 17:50:58
            10 11-JUL-02 17:50:58 11-JUL-02 17:51:03
            11 11-JUL-02 17:51:03 11-JUL-02 18:34:11
    4 rows selected.
    The logs are now available for log apply services to apply redo data to the standby database.
    Step 4 Verify that the new archived redo log was applied.
    On the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log was applied.
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG
    2 ORDER BY SEQUENCE#;
    SEQUENCE# APP
    ---------- ---
             8 YES
             9 YES
            10 YES
            11 YES
    4 rows selected.
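
    As a complement to the log-by-log check above, on 10g and later the standby can also report its overall currency directly (a quick sketch):

    -- On the standby: how far behind the primary this database currently is.
    SELECT NAME, VALUE
      FROM V$DATAGUARD_STATS
     WHERE NAME IN ('transport lag', 'apply lag');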

  • Using streams to capture data from physical standby database.

    Does anybody know if it is possible to use Streams to capture data from a physical standby database instead of the PROD database? The standby database is in read-only mode. We use Oracle 11gR2.
    Thanks in advance.

    A physical standby is closed: how would it manage the capture queues and spillover queues when the target is not open for writes? Also, the data dictionary must mirror the primary, but if you run capture there, you introduce rules that do not exist on the primary. How would that work?

  • How to setup Data guard for SAP database?

    Can someone please tell me how to set up Data Guard for SAP databases?
    Thanks,
    Abhi

    Hi Abhi,
    have a look at OSS Note 105047 - Support for Oracle functions in the SAP environment; you will find this under section 14, Oracle Data Guard.
    You can use "Physical Standby".
    You cannot use "Logical Standby".
    You are allowed to use Fast Start Failover (FSFO) but SAP Support is not provided.
    You can use Data Guard Broker.
    You can use Maximum Performance Mode, Maximum Availability Mode and Maximum Protection Mode.
    In the case of Maximum Availability and Maximum Protection, you must pay particular attention to a fast network connection in order to avoid performance problems.
    Maximum Protection causes the primary database to terminate if problems occur in the standby database.
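    As a minimal illustration of the primary-side transport setting behind these modes (the service name and DB_UNIQUE_NAME are hypothetical), Maximum Performance ships redo asynchronously:

    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
      'SERVICE=sapstby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=sapstby'
      SCOPE=BOTH;

    -- Maximum Availability and Maximum Protection use SYNC instead of ASYNC,
    -- which is why the note above stresses a fast network connection.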
    And here you find an Oracle white paper from 2010: http://www.oracle.com/us/solutions/sap/wp-ora4sap-dataguard11g-303811.pdf
    Perhaps some SAP users have answers for you: http://scn.sap.com/community/oracle/content?query=guard
    regards
    Kay

  • Data Integration Between Oracle Databases

    Hi everybody,
    I am an Oracle DBA, and now I'm experiencing something new in my job: I have to define the way my company will perform data integration.
    My situation is this: here, people are building a new, huge system called MEGA with its own Oracle database. But there are 20 other, smaller systems, each with its own Oracle database, that will share data with the MEGA database.
    The most common situation here is that some data must be synchronized every x hours between the main MEGA database and the other, smaller databases. But some data must be synchronized immediately, to preserve online data integrity.
    My doubt is: what's the best way to do this integration? Do I have to use materialized views, triggers, or procedures, or is there a tool that comes with Oracle Database that I can use to simplify this data integration?
    Do I have to create a database to keep the transactional data?
    If someone can give me any hints, I'll be very thankful.
    Bye, bye...

    Hi,
    Depending on your situation and whether your databases are heterogeneous you could start by looking at the following two OTN pages:
    If you require data transformations look at OracleAS InterConnect:
    http://otn.oracle.com/tech/integration/interconnect.html
    Replication with Oracle Streams:
    http://otn.oracle.com/products/dataint/index.html
    Best regards,
    Bart
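
    For the "every x hours" part of the question, a minimal materialized view sketch (the link and table names are hypothetical):

    -- In a smaller database: re-pull a copy of a MEGA table every 4 hours.
    CREATE MATERIALIZED VIEW customers_mv
      REFRESH FORCE
      START WITH SYSDATE NEXT SYSDATE + 4/24
      AS SELECT * FROM customers@mega_link;

    The data that must stay current in real time is where the replication tooling above (Streams, InterConnect) comes in.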

  • Update/insert/delete data from xcelsius to Database via web service

    Hi,
    I need to create a dashboard whose functions can update/insert/delete data in a database through web services. As far as I know, there are two Xcelsius add-on products which support those functions: InfoBurst and Flynet.
    InfoBurst
    http://www.infosol.com/azbocug/minutes/4-Writeback%20to%20a%20Database%20with%20Xcelsius.pdf
    Flynet
    http://www.flynetviewer.com/public/community/Blogs/FlynetXcelsiusServerUser/default.aspx
    Apart from these two purchasable Xcelsius add-ons, is there any other solution?
    Maybe I need to write something in MSSQL or C# that enables insert, update, delete, etc.?
    Note: I do not use Xcelsius Engage Server; I use Xcelsius Engage only.
    thanks,
    regards
    s1
    Edited by: Leong Pui Kee on Mar 1, 2011 6:06 AM

    Hi,
    As of now in Xcelsius/Dashboard Design there is no feature or functionality to insert/update/delete data from database.
    Solution:
    Create a web service in, let's say, C# or Java, which will perform the insert/update/delete operations.
    In Xcelsius, add a Web Service connection and use the above web service.
    The Xcelsius Web Service connection provides an option to pass input values to a web service (input values pane) and get the result (output values pane).
    We can pass the values to be written as input to the web service via the Web Service connection from Xcelsius, and the service writes the data to the database.
    Note:
    Performing a delete operation from an Xcelsius dashboard could be risky and may delete important data from the database; I would prefer not to expose delete functionality in an Xcelsius dashboard.
    Hope this helps!
    Thank you.
    Regards,
    Vinay Mhaske

  • Move XMLTYPE data to a remote  database

    Hello,
    I need your experience !!!!!
    I am working with the 10.2.1 database.
    I have tried to call a remote procedure (over a database link) with an XMLTYPE parameter, and I get an error related to a LOB locator.
    One solution could be to save the XMLTYPE to disk.
    Does anybody know another solution to transfer XMLTYPE data to a remote database?
    Thanks in advance
    Best Regards

    A future alternative will be Oracle Streams. By the way, one of the advantages (sometimes not, but anyway) of an Oracle database is that you can use object-oriented methods, relational methods, Java solutions, etc., and nowadays XML methods. XML was once all about transporting data, creating a uniform interface between solutions (databases, application/web servers, technology stacks, etc.). Think outside the box: XML DB isn't relational, but it is packaged in one. Make use of it, and vice versa.
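
    One commonly suggested workaround for the LOB-locator restriction (a sketch only; the table, view, and link names are hypothetical): serialize the XMLTYPE to a CLOB on the remote side, pull the CLOB across the link, and rebuild the XMLTYPE locally.

    -- On the remote database: expose the XML as a CLOB.
    CREATE OR REPLACE VIEW xml_docs_clob_v AS
      SELECT x.id, x.doc.getClobVal() AS doc_clob
        FROM xml_docs x;

    -- On the local database: pull the CLOB over the link, rebuild the XMLTYPE.
    INSERT INTO xml_docs_local (id, doc)
      SELECT id, XMLTYPE(doc_clob)
        FROM xml_docs_clob_v@remote_db;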

  • Upload data from excel into database through pl/sql

    Hi All,
    I have an Excel file which contains data, let's say employee details.
    I have an upload button which is used to upload the Excel file; I then want to map the spreadsheet cells to database columns and upload the data into the database through PL/SQL code.
    In short, I want to upload the data from Excel into the database using PL/SQL code,
    or suggest to me any other way to do this (except the data load method present in APEX).
    Thanks,
    Jitendra

    If you use APEX 4 you can define your own table.
    The code below is for APEX 3.
    PROCEDURE pro_carga_planilla_prosp( p_archivo VARCHAR2) IS
    v_blob_data BLOB;
    v_blob_len NUMBER;
    v_position NUMBER;
    v_raw_chunk RAW(10000);
    v_char CHAR(1);
    c_chunk_len number := 1;
    v_line VARCHAR2 (32767) := NULL;
    v_data_array wwv_flow_global.vc_arr2;
    v_rows number;
    v_sr_no number := 1;
    v_ok boolean := true;
    v_local_ok BOOLEAN := TRUE;
    v_reg_ok NUMBER := 0;
    v_reg_ko NUMBER := 0;
    v_localidad_id NUMBER;
    v_departamento_id NUMBER;
    v_cargo_id NUMBER;
    v_prospecto_id NUMBER;
    v_asesor_id NUMBER;
    V_REG prospectos%rowtype;
    BEGIN
    -- Read data from wwv_flow_files
    select blob_content into v_blob_data
    from wwv_flow_files
    where name= p_archivo;
    v_blob_len := dbms_lob.getlength(v_blob_data);
    v_position := 1;
    -- Read and convert binary to char
    WHILE ( v_position <= v_blob_len ) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data,c_chunk_len,v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    -- pro_log('linea '||v_line);
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
    -- Convert comma to : to use wwv_flow_utilities
    v_line := replace(REPLACE (v_line, ',', ':'), ';',':');
    v_line := replace(replace(v_line, chr(10)),chr(13));
    if substr(v_line,1,1)= ':' then
    v_line := '0'||v_line;
    end if;
    if instr(v_line,':',1,21) = 0 then
    if instr(v_line,':',1,20) = 0 then
    v_line:=v_line||':';
    end if;
    v_line:=v_line||':';
    end if;
    -- pro_log(v_line);
    -- Convert each column separated by : into array of data
    v_data_array := wwv_flow_utilities.string_to_table (v_line);
    -- Insert data into target table
    IF v_data_array(1) IS NOT NULL AND
    v_sr_no <> 1 THEN
    V_REG.NOMBRE:=ltrim(rtrim(v_data_array(2)));
    V_REG.RAZON_SOCIAL:=v_data_array(3);
    V_REG.DIRECCION := v_data_array(4)||' '||v_data_array(5);
    -- PRO_LOG('PROSP 1 ' ||v_sr_no);
    v_localidad_id := pack_empresas.get_localidad(v_data_array(6));
    -- PRO_LOG('PROSP 1.1 '||v_sr_no);
    V_REG.LOCALIDAD_ID:=v_localidad_id;
    -- PRO_LOG('PROSP 1.2 '||v_sr_no);
    V_REG.CODIGO_POSTAL:=LTRIM(RTRIM(v_data_array(7)) );
    -- PRO_LOG('PROSP 1.3 '||v_sr_no);
    -- PRO_LOG('PROSP 1.1 '||v_sr_no);
    v_departamento_id := pack_empresas.get_departamento(v_data_array(8));
    -- PRO_LOG('PROSP 1.4 '||v_sr_no);
    V_REG.DEPARTAMENTO_ID:=v_departamento_id;
    -- PRO_LOG('PROSP 1.5 '||v_sr_no);
    V_REG.TELEFONO:=v_data_array(9);
    --PRO_LOG('PROSP 1.6 '||v_sr_no);
    V_REG.TELEFONO2:=v_data_array(10);
    -- PRO_LOG('PROSP 1.7 '||v_sr_no);
    V_REG.RUBRO:=v_data_array(11);
    -- PRO_LOG('PROSP 1.8 '||v_sr_no);
    V_REG.RUC:=ltrim(rtrim(v_data_array(12)));
    -- PRO_LOG('PROSP 1.9 '||v_sr_no);
    -- pro_log(v_data_array(1));
    -- pro_log(v_data_array(2));
    V_REG.CANTIDAD_EMPLEADOS:=RTRIM(LTRIM(v_data_array(13)));
    -- PRO_LOG('PROSP 1.10 '||v_sr_no);
    -- pro_log(v_data_array(14));
    V_REG.CANTIDAD_BENEFICIARIOS:=RTRIM(LTRIM(v_data_array(14)));
    --PRO_LOG('PROSP 1.11 '||v_sr_no);
    V_REG.MAIL:=v_data_array(19);
    -- pro_log(V_REG.MAIL);
    -- PRO_LOG('PROSP 1.12 '||v_sr_no);
    -- v_data_array(20):= replace(replace(v_data_array(20),chr(10)),chr(13));
    -- Fixed: test that element 20 EXISTS before referencing it
    -- (the original test was inverted and would raise NO_DATA_FOUND).
    if v_data_array.exists(20) then
    -- pro_log('existe');
    -- pro_log(ltrim(rtrim(replace(replace(v_data_array(20),chr(10)),chr(13)))));
    V_REG.PROVEEDOR := ltrim(rtrim(replace(replace(v_data_array(20),chr(10)),chr(13))));
    else
    v_data_array(20) := null; -- create the slot so the INSERT below can reference it
    end if;
    -- V_REG.PROVEEDOR:=v_data_array(20);
    -- PRO_LOG('PROSP 1.13 '||v_sr_no);
    -- Fixed: same inverted existence test for element 21.
    if v_data_array.exists(21) then
    V_REG.OBSERVACIONES := v_data_array(21);
    else
    v_data_array(21) := null; -- create the slot so the INSERT below can reference it
    end if;
    -- PRO_LOG('PROSP 1.14 '||v_sr_no);
    -- PRO_LOG('PROSP 1.2 '||v_sr_no);
    insert into prospectos (nombre,razon_social, direccion,localidad_id,codigo_postal,
    departamento_id, telefono, telefono2, rubro,ruc,cantidad_empleados,
    cantidad_beneficiarios,mail,proveedor,observaciones)
    values (nvl(ltrim(rtrim(v_data_array(2))),v_data_array(3)), v_data_array(3),
    v_data_array(4)||' '||v_data_array(5),
    v_localidad_id, LTRIM(RTRIM(v_data_array(7))),v_departamento_id, v_data_array(9),
    v_data_array(10),v_data_array(11), ltrim(rtrim(v_data_array(12))), RTRIM(LTRIM(v_data_array(13))),
    RTRIM(LTRIM(v_data_array(14))),v_data_array(19),v_data_array(20), v_data_array(21))
    returning prospecto_id INTO v_prospecto_id;
    -- PRO_LOG('PROSP 2');
    v_cargo_id := pack_empresas.get_cargo(v_data_array(17));
    -- PRO_LOG('PROSP 3');
    insert into prospecto_contactos (prospecto_id,nombre,apellido,cargo_id,
    telefono,mail)
    values (v_prospecto_id, nvl(v_data_array(15),'S/N'), nvl(v_data_array(16),'S/A'),
    v_cargo_id, v_data_array(18), v_data_array(19));
    -- PRO_LOG('PROSP 4');
    v_asesor_id := pack_empresas.get_asesor(v_data_array(1));
    -- PRO_LOG('PROSP 5');
    insert into asignaciones (prospecto_id,asesor_id,fecha_asignacion)
    values (v_prospecto_id, v_asesor_id, trunc(sysdate));
    -- PRO_LOG('PROSP 6');
    END IF;
    -- Clear out
    v_line := NULL;
    v_sr_no := v_sr_no + 1;
    END IF;
    END LOOP;
    delete wwv_flow_files
    where name= p_archivo;
    END pro_carga_planilla_prosp;
    function hex_to_decimal
    --this function is based on one by Connor McDonald
    --http://www.jlcomp.demon.co.uk/faq/base_convert.html
    ( p_hex_str in varchar2 ) return number
    is
    v_dec number;
    v_hex varchar2(16) := '0123456789ABCDEF';
    begin
    v_dec := 0;
    for indx in 1 .. length(p_hex_str)
    loop
    v_dec := v_dec * 16 + instr(v_hex,upper(substr(p_hex_str,indx,1)))-1;
    end loop;
    return v_dec;
    end hex_to_decimal;

  • How to get data into the mySQL database?

    First some background.
    I have a website that has outgrown its designed dimensions and is a huge burden to maintain. See PPBM5 Benchmark
    There is a lot of maintenance work involved, so I'm investigating a PHP/MySQL approach to ease the burden and to add functionality to the site. With the current Excel-based structure and over 420 entries, it is cumbersome for me to maintain, but also for users to find what they need.
    A MySQL based dynamic structure is a lot easier and offers vastly more selection capabilities, like selecting only records that meet specific criteria.
    Data submission is done with a form that contains most of the relevant data, but the drawback is that people submitting their data are often not technically inclined and give wrong answers due to a lack of understanding, or make typos. The test results are attached in one or two separate .txt files, but often submitters have not read the instructions correctly or did something wrong, so these attached .txt files cannot be trusted automatically; they have to be checked before inclusion.
    These were my initial thoughts:
    1. Data collection:
    To avoid spending all our energy and time on correcting typos, chasing missing data, and correcting errors, I am investigating the use of CPU-Z in Ghost mode to create a .txt or .html file that contains all the relevant hardware info we need and even more. It gives all the info we currently have, but adds data like number of memory sticks, DDR timings, stock clock speed and BCLK setting, video card info and VRAM size, etc.
    To see what I mean, run CPU-Z, go to the About tab and press the Save Report button and look at the results.
    This can all be done without user intervention in an automatic way, but  maybe I need to add an Auto-It file to the test to make it all run as  desired.
    If this works and I'm able to extract the relevant data from the created  file and can insert it into the database, we may be in business for the  next version of PPBM5.5 or PPBM6. It does require a modification to the instructions, making them a lot  easier, because there is less data to fill out.
    2. Data submission:
    The submission form can be simplified if  the CPU-Z data can be used. We have to create an automatic way to attach  the created .html file from CPU-Z to the submission form and we have to  streamline the Output.txt and Output-MPE.txt files to be more easily included in the 'form.lib.php' file. It  currently is manual labor and very time consuming.
    3. Adding to Database:
    I have to find a way to create database  records from the Gmail forms I receive. All incoming mail messages need  to be checked on relevancy and if relevant, need to be added  automatically to the database and then offered for approval before final inclusion in the database. Data included in the database  will then include submission date and time, Email address,  IP address  used, plus links to the files submitted and available on the website.
    4. Publication of the database:
    After approval of new records from step 3, all updates will be automatically applied to the database and accessible to users. I do not yet intend to introduce user accounts requiring login before all functionality is accessible: too much trouble and administration.
    Queries should be possible on things like CPU (check box), e.g. include i7-920, i7-930, i7-950 but exclude i7-980X and i7-990X; size of memory (check box); overclocked (boolean, yes/no); SSD as OS disk; and similar options.
    The biggest problem is to keep the color grading and statistical indicators (Top, D9, Q3, Med, Q1 and D1) intact on dynamically generated queries. Say you make a query which results in 20 observations: it should show the related colors and legends. The next query results in 48 observations, and of course the color grading and legends need to reflect that. Question in my mind: does the RPI remain constant, independent of the query, or does it need to be recalculated on the basis of the query?
    Next thing is to allow a user to select a specific observation and by  simply clicking on it be shown, in a separate window (detail page) or  accordion, all the CPU-Z related information about the hardware.
    The graphs, Top-20 and MPE Gains, need to be dynamically adjusted, based on the query used.
    5. Ideally, external links:
    In an ideal situation, one could link the  CPU-Z data to external price databases, looking up current prices for  CPU, memory, video card, disks, raid controller, etc. to get instant  BFTB charts, based on the query made. But that is the next step.
    Situation now:
    I have a MySQL database that is easily updated with the new submissions. Simply create a .CSV file from the submitted forms and import that into the database. The bulk of the initial work is done. Lots remain to be done, as you can see above, but that is for a later time.
    Question:
    I have this table, that needs to be filled with data in the submitted and attached files. Mr. X submitted his data and can be uniquely identified by his "Ref_ID". He attached one or two files in .TXT format with the relevant test data. These files are stored on the server with a concatenated name:
    "Ref_ID","-","filename"
    Say his Ref-ID is: 20110204-6cf5 and his submitted file is called: Output(99).txt then the file can be found on the server as
    20110204-6cf5-Output(99).txt
    I need to be able to open that comma delimited file, the contents may look like this: "439","1036","819","531" and insert these contents into the relevant record and fields.
    Graphically, [screenshot not reproduced here] is what I want to achieve.
    This being my first exposure to PHP/MySQL, you can imagine I'm not clear on how to go from here.
    An added complication is that I actually have 5 numbers to insert per record, plus two calculated fields: Total Score and RPI. I haven't yet figured out how to handle calculated fields; maybe they belong only in the PHP/HTML code and not in the database.
    I hope someone can help me.

    You do have a very complex-looking site and may need several tables in MySQL to handle all that data. If you are new to PHP/MySQL, I would suggest taking a look at this tutorial; it will help you get started in understanding how to $_GET info from a database and also how to $_POST data to a database. I am no expert, just learning myself, and I found this very helpful. This is the link: http://www.adobe.com/devnet/dreamweaver/articles/first_dynamic_site_pt1.html
    There are also many tutorials on YouTube to help build a CMS (content management system). I would suggest the following:
    http://www.youtube.com/user/phpacademy
    http://www.youtube.com/user/betterphp
    http://www.youtube.com/user/flashbuilding
    And many more on my channel here
    http://www.youtube.com/user/Whisperingonthewind
    A CMS makes it easier to maintain, add, edit and delete content.
    I have also recently bought a book by David Powers, "Training from the Source"; very helpful.
    Anyway hope you get it sorted.
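
    On the specific question of getting the attached comma-delimited file into the right record, a minimal MySQL sketch (the table and column names are hypothetical, the path depends on the server configuration, and the Ref_ID and filename are the examples from the question):

    -- Stage the scores from the attached file (requires the FILE privilege
    -- and a path permitted by secure_file_priv).
    CREATE TABLE staging_scores (
      score1 INT, score2 INT, score3 INT, score4 INT, score5 INT
    );

    LOAD DATA INFILE '/uploads/20110204-6cf5-Output(99).txt'
    INTO TABLE staging_scores
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    (score1, score2, score3, score4, score5);

    -- Copy them into the record identified by the Ref_ID.
    UPDATE results r, staging_scores s
       SET r.score1 = s.score1, r.score2 = s.score2, r.score3 = s.score3,
           r.score4 = s.score4, r.score5 = s.score5
     WHERE r.ref_id = '20110204-6cf5';

    Calculated fields such as Total Score and RPI could then be computed in the SELECT that renders the page rather than stored in the table.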

  • How to store measurement data in a single database row

    I have to store time-data series in a database and do some analysis using Matlab later. Therefore the question might be more of a database question than a DIAdem one. Nevertheless, I'm interested whether anyone has best practices for this issue.
    I have a database which holds lifecycle records for certain components of same nature and different variants. Depending on the variant I have test setups which record certain data. There is a common set of component properties and a varying number of environmental measurements to be included. Also the duration of data acquisition varies across the variants.
    Therefore tables appear to be a non-optimal solution for storing the data, because the needed number of columns is unknown. Additionally, I cannot create individual tables for each sample of a variant; this would produce too many tables.
    So there are different approaches I have thought of:
    Saving the TDM and TDX files as text or as BLOBs: this makes it necessary to use intermediate files.
    Saving the data as XML text: I don't know yet whether I can process XML data in Matlab.
    Has anybody an advice on that problem?
    Regards
    Chris

    Chris
    Sorry for the lateness in replying to your post. 
    I have done quite a bit of work using a database to store test results (in my case this was an Oracle DB, using a system called ASAM-ODS).
    My 2 Cents:
    Three functions were needed by my users: 1) to search for and find the tests; 2) to take the list of tests and process the data into a report/output summary; and 3) if the file size is large, to import the data quickly into the analysis tool to speed up processing.
    1) Searching for test results.  This all depends on what parameters are of value for searching.  In practice this is a smaller list of values(usually under 20), but I have had great difficulty getting users to agree on what these parameters are. They tend to want to search for anything.   The organization of the searching parameters has direct relationship to how you can search.   The NI Datafinder does a nice job of searching for parameters, so long as the parameter data are stored in properties of Channel Groups or Channels. It does not search or index if the values are in channel values.
    Another note: Given these are searchable parameters, it helps greatly if these parameters have a controlled entry, so that the parameters are consistent over all tests, and not dependent on free form entry by each operator. Searching becomes impossible if the operators enter dates/ names in wildly different formats.
    2) A similar issue exists if you put the values into a database. (I will use the database terms of table, column (parameter), and row (an instance of data that would be one test record).)
    The SQL SELECT statement can do fast finds if you store the searchable parameters in a table, with one row for each test record. The files I worked with have more than 2000 parameters. Making a table that would store all of these, and be searchable on all of these, means a very large number of columns. I did not like this approach, as it has substantial maintenance time, and when changes are made, things get ugly quickly.
    3) This is where having the file format be something that the analysis tool can quickly load is beneficial, especially if the data files are large. In DIAdem's case, it reads TDM/TDMS files very quickly into the Data Portal. It can also read MDF or HDF files, but these are hierarchical structures that require custom code to traverse the information and get it into the Data Portal for use in analysis/reporting. (They take more time to read, but offer much more flexibility in data structure than the two-level TDM/TDMS format.)
    My personal preferences
    I would not want to put the test data into a Table row. Each of the columns would be fixed and the table would be very wide in columns.
    I personally like to put the test data into a file, like TDMS, MDF, or HDF, and then have the database hold an entry referencing that file. The values in the database are just the parameters used for test searching, either in DataFinder or in SQL commands in the user interface.
    Hopefully these comments help your tasks some.
    Respectfully,
    Paul
    tdmsmith.com
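
    A minimal sketch of the arrangement Paul describes, with searchable parameters in slim columns and the bulk data in a referenced file (all names are hypothetical):

    CREATE TABLE test_record (
      test_id       NUMBER PRIMARY KEY,
      variant       VARCHAR2(30),   -- controlled-entry search parameters...
      operator      VARCHAR2(50),
      test_date     DATE,
      data_file_ref VARCHAR2(255)   -- ...plus a pointer to the TDMS/MDF/HDF file
    );

    -- Searches touch only the parameter columns; the analysis tool then opens the file.
    SELECT test_id, data_file_ref
      FROM test_record
     WHERE variant = 'B2' AND test_date > DATE '2012-01-01';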

  • Exporting data from SQL Server database to Oracle database

    Hello All,
    We need to replicate a table's data from a SQL Server database to an Oracle database.
    Can this task be accomplished using the Import/Export wizard or linked servers?
    Can you help me with which Oracle data access components I should download to do this?
    I am using SQL Server 2012.
    And i have Oracle 11g release 2 client installed in my system.
    Thanks in Advance.
    Thanks and Regards, Readers please vote for my posts if the questions i asked are helpful.

    Yes, you can definitely transfer data from SQL Server to Oracle. Have a look at the links below:
    Export SQL server data to Oracle Using SSIS
    Use OLEDB data provider to transfer data from SQL server to Oracle
    Using Import Export Wizard
    Similar thread
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles
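
    A minimal linked-server sketch of the push from SQL Server to Oracle (the server name, TNS alias, and table names are hypothetical; it assumes the OraOLEDB.Oracle provider installed with the Oracle client):

    -- Register the Oracle database as a linked server.
    EXEC sp_addlinkedserver
         @server = 'ORA_LINK',
         @srvproduct = 'Oracle',
         @provider = 'OraOLEDB.Oracle',
         @datasrc = 'ORCL';  -- TNS alias from the 11gR2 client

    -- Push the table's rows through the link.
    INSERT INTO OPENQUERY(ORA_LINK, 'SELECT COL1, COL2 FROM TARGET_TABLE')
    SELECT col1, col2
      FROM dbo.source_table;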
