Oracle column store db
I've heard that Oracle 12c will have a column store option similar to what companies like Vertica and SAP HANA offer. As a very general question: is this available in the current Oracle 12c release? If so, is anyone using this feature, and what is your opinion of its performance for analytics, such as rapid slicing and dicing of data?
Hi,
I think you may be referring to the new 'In-Memory' option for 12c, where Oracle will store the data in memory in both 'normal' (row) and columnar format to help with analytics queries (and general performance). This is the thing Larry referred to as 'ungodly speed'.
This should be coming in 12.1.0.2 from what I heard, which, based on the normal release schedule, is probably Q3 this year.
Not sure if it will be a cost option or not, but I would guess it will be.
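To make the row-versus-columnar distinction concrete, here is a toy sketch in plain Python (nothing Oracle-specific, purely illustrative): an analytic aggregate only has to touch one contiguous column in the columnar layout, while a row layout drags every full row through the scan.

```python
# Row layout: a list of complete row tuples (id, customer, amount).
rows = [(i, f"cust{i % 100}", i * 1.5) for i in range(10_000)]

# Columnar layout: each attribute stored contiguously in its own list.
ids, customers, amounts = (list(col) for col in zip(*rows))

# "SELECT SUM(amount)": the column store reads only the amount column,
# while the row store has to walk every tuple to reach field 2.
total_row_store = sum(r[2] for r in rows)
total_col_store = sum(amounts)
assert total_row_store == total_col_store == 74_992_500.0
```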
Rich
Similar Messages
-
Save HTML data in a Oracle Column
What would be the best way to save HTML data in an Oracle column?
While VARCHAR2 can be used for up to 4000 bytes, it would still mean escaping a lot of special characters. Is there a better way to do this? Any help would be greatly appreciated.
Besides the XML types available to you, and the associated Oracle-provided packages to input and extract XML, I have heard arguments that both forms should be stored in the database: that is, you should store the extracted data in normal Oracle columns so it can be used like any other attribute, and you should also store the XML as XML, which can then be used as XML.
For data that is only inserted and deleted I can see this method, but if updates to information within the XML are required, then you have just added another set of work requirements and complexity.
Who is going to access the data? What tools will the users use? Where else does the data need to be provided, and in what format? The answers to who will use the data and how should tell you what form the data should be stored in.
My personal view is that a relational database should be used for what it was designed for, storing relational data.
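On the escaping concern specifically: if the HTML is passed as a bind variable rather than spliced into the SQL text, no manual escaping is needed at all. A minimal sketch of the idea, using Python's built-in sqlite3 as a stand-in database; with Oracle the same pattern applies via bind placeholders, and a CLOB would lift the 4000-byte VARCHAR2 limit:

```python
import sqlite3

# HTML full of characters that would be painful to escape by hand.
html = '<div class="note">Fish &amp; Chips, \'quotes\' and <br/> tags</div>'

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (body TEXT)")

# The driver passes the value out-of-band, so no quoting is required.
conn.execute("INSERT INTO pages (body) VALUES (?)", (html,))

stored = conn.execute("SELECT body FROM pages").fetchone()[0]
assert stored == html  # the markup round-trips unchanged
```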
HTH -- Mark D Powell -- -
While defining a columnar table, what is the purpose of column store type
Hi folks
I have two questions related to columnar table definition.
1. What is the purpose of column store type.
While defining a columnar table, what is the purpose of the column store type (STRING, CS_FIXEDSTRING, CS_INT, etc.)? When I define a table using the UI, I see that the column shows STRING, but when I go to Export SQL it does not show. Is this mandatory or optional?
2. VARCHAR vs. CHAR - In the UI, when I create the table, I do not see the CHAR option, but I see a lot of discussion where people are using CHAR for defining the columnar table. Not sure why the UI dropdown does not show it. I also read that we should avoid using VARCHAR because those columns are not compressed; is that true? I thought the column store compresses all columns. Are there certain columns that cannot be compressed?
Please let me know where I can find more information about these two questions.
Poonam
Hi Poonam,
The CS_ data types are the data types used internally in the column store. They can be supplied, but it is not at all required or recommended to do so.
SAP HANA will automatically use the correct CS_ data type for every SQL data type in your table definitions.
To be very clear about this: don't use the CS_ data types directly. Just stick to the SQL data types.
Concerning VARCHAR vs. CHAR: fixed-length character data types are no longer supported and no longer show up in the documentation.
I have no idea why you believe that VARCHAR columns are not compressed; this is just a myth.
create column table charcompr (fchar char(20), vchar varchar(20));
insert into charcompr (
select lpad ('x', to_int (rand()*20), 'y'), null from objects cross join objects);
-- same data into both columns
update charcompr set vchar = fchar;
-- perform the delta merge and force a compression optimization
merge delta of charcompr;
update charcompr with parameters ('OPTIMIZE_COMPRESSION' ='FORCE');
-- check the memory requirements
select COLUMN_NAME, MEMORY_SIZE_IN_TOTAL, UNCOMPRESSED_SIZE, COUNT, DISTINCT_COUNT, COMPRESSION_TYPE
from m_cs_columns where table_name = 'CHARCOMPR';
COLUMN_NAME  MEMORY_SIZE_IN_TOTAL  UNCOMPRESSED_SIZE  COUNT    DISTINCT_COUNT  COMPRESSION_TYPE
FCHAR        3661                  70285738           6692569  20              RLE
VCHAR        3661                  70285738           6692569  20              RLE
We see that compression and memory requirements are the same for both fixed- and variable-length character columns.
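To see why padding doesn't change the picture, here is a tiny run-length-encoding sketch in Python (RLE being the compression type the query above reports; real column stores also dictionary-encode first, so this is only an illustration): the run structure, and therefore the compressed size, is identical whether or not the values carry trailing pad blanks.

```python
def rle_encode(values):
    """Collapse a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# 20 distinct values, 1000 consecutive repeats each -- once as
# blank-padded CHAR-style strings, once stripped VARCHAR-style.
fixed = [("x" + "y" * n).ljust(20) for n in range(20) for _ in range(1000)]
variable = [s.rstrip() for s in fixed]

# Same number of runs either way: padding adds nothing to compress.
assert len(rle_encode(fixed)) == len(rle_encode(variable)) == 20
```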
- Lars -
SAP HANA XSODATA service (Service exception: column store error.)
Hi all,
I have a problem with my calculation view using an xsodata service on it. (There's an input parameter called P_SWERK.)
In my calculation view, the data origins are two analytic views (in which the input parameter P_SWERK should filter the data at the beginning of the SQLScript code).
First I read the analytic views with the CE_OLAP_VIEW function, and then I apply a CE_PROJECTION function to them, using the input parameter P_SWERK as a filter on the field SWERK.
But when I run my application in the browser, the following error occurs:
<message xml:lang="en-US">Service exception: column store error.</message>
The link is this :
http://host:port/Project_DM/services/Test/TEST_ZIIG_PDM_CALC_VIEW_FINAL_service.xsodata/PianiDiManutenzioneParameters(P_SWERK='CO05')/Results
The service definition is :
service {
"EricssonItalgas/TEST_ZIIG_PDM_VIEW_FINAL.calculationview" as "PianiDiManutenzione" keys generate local "ID"
parameters via entity;
}
The SAP HANA AWS revision is 60.
Someone could you help me,please?
Thanks in advance.
Dario.
Hi Dario,
Does the calculation view work without the xsodata service? From the URL, your XS project name should be Project_DM, but from the xsodata source the project name is EricssonItalgas. I'm confused by this. Did you use rewrite_rules, or something else?
Best regards,
Wenjun -
Hi All,
I get the below error when I load my xsjs file in the browser:
Error while executing query: [dberror(PreparedStatement.executeQuery): 2048 - column store error: column store error: [2950] user is not authorized : at ptime/session/dist/RemoteQueryExecution.cc:1354]
I am able to execute the same query in the HANA SQL editor.
Please note that no analytic privileges are applied to the view.
Could you please help solve this issue?
Regards,
Logesh Kumar.
Hey,
Are you using the same database user for both the SQL editor and XSJS?
Try the following:
Before executing the query, display it, copy it from the browser, and execute it in the SQL editor.
Put the statement in a try/catch block and analyse the exception.
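That debugging pattern, sketched in Python against sqlite3 (purely illustrative; in XSJS the equivalent is a try/catch around the prepared statement): print the exact statement before running it, so it can be copied into a SQL editor, and let the catch block expose the real database error.

```python
import sqlite3

def run_query(conn, sql, params=()):
    # Show the exact statement first, so it can be copied into a SQL
    # editor and compared; then surface the real error if it fails.
    print("Executing:", sql, params)
    try:
        return conn.execute(sql, params).fetchall()
    except sqlite3.Error as exc:
        print("Query failed:", exc)
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
assert run_query(conn, "SELECT n FROM t WHERE n = ?", (1,)) == [(1,)]
```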
Sreehari -
Hi Experts,
We got a problem here.
We have a query that runs without any problems in RSRT. But when I try to use it as a data source (InfoProvider) in an APD, I get the following error:
column store error: fail to create scenario: [340 2048
We are in BW731006 & ABAP731006.
Any advice?
Thanks in advance!
Tengran
Hi,
Can you check the part of the error highlighted in the red box (click on it)? It will give you a better idea about the issue.
Also, please share the detailed error with us.
Best Regards,
Arpit -
Problem in build Oracle Fusion Store Front Demo Application
I downloaded the "Oracle Fusion Store Front Demo Application" from http://www.oracle.com/technology/products/jdev/samples/fod/index.html,
and tried to build it after finishing editing the "build.properties" in JDeveloper 11g, but four errors appeared. The message in the log window is:
D:\jdeveloper11\workspace\fod\Infrastructure\Ant>
D:\jdeveloper11\jdk\bin\javaw.exe -classpath D:\jdeveloper11\jdev\lib\ojc.jar;D:\jdeveloper11\ant\lib\ant-oracle.jar;D:\jdeveloper11\jdev\lib\jdev.jar;D:\jdeveloper11\ant\lib\ant-apache-regexp.jar;D:\jdeveloper11\ant\lib\ant-jdepend.jar;D:\jdeveloper11\ant\lib\ant-icontract.jar;D:\jdeveloper11\ant\lib\ant-vaj.jar;D:\jdeveloper11\ant\lib\ant-apache-oro.jar;D:\jdeveloper11\ant\lib\ant-junit.jar;D:\jdeveloper11\ant\lib\ant-swing.jar;D:\jdeveloper11\ant\lib\xercesImpl.jar;D:\jdeveloper11\ant\lib\xml-apis.jar;D:\jdeveloper11\ant\lib\ant-apache-log4j.jar;D:\jdeveloper11\ant\lib\ant-trax.jar;D:\jdeveloper11\ant\lib\ant-apache-bcel.jar;D:\jdeveloper11\ant\lib\ant-stylebook.jar;D:\jdeveloper11\ant\lib\ant-xslp.jar;D:\jdeveloper11\ant\lib\ant-jai.jar;D:\jdeveloper11\ant\lib\ant-javamail.jar;D:\jdeveloper11\ant\lib\commons-net-1.3.0.jar;D:\jdeveloper11\ant\lib\ant-apache-resolver.jar;D:\jdeveloper11\ant\lib\ant-xalan1.jar;D:\jdeveloper11\ant\lib\ant-weblogic.jar;D:\jdeveloper11\ant\lib\ant-commons-net.jar;D:\jdeveloper11\ant\lib\ant-jmf.jar;D:\jdeveloper11\ant\lib\ant-launcher.jar;D:\jdeveloper11\ant\lib\ant-apache-bsf.jar;D:\jdeveloper11\ant\lib\jakarta-oro-2.0.8.jar;D:\jdeveloper11\ant\lib\ant-starteam.jar;D:\jdeveloper11\ant\lib\ant-netrexx.jar;D:\jdeveloper11\ant\lib\ant-jsch.jar;D:\jdeveloper11\ant\lib\ant.jar;D:\jdeveloper11\ant\lib\ant-commons-logging.jar;D:\jdeveloper11\ant\lib\ant-nodeps.jar;D:\jdeveloper11\ant\lib\ant-antlr.jar;D:\jdeveloper11\jdk\lib\tools.jar -Djdev.ant.port=1825 -Dant.home=D:\jdeveloper11\ant -Djdev.ant.debug.port=1826 org.apache.tools.ant.Main -logger oracle.jdevimpl.ant.runner.OutOfProcessAntLogger -inputhandler oracle.jdevimpl.ant.runner.OutOfProcessInputHandler -f D:\jdeveloper11\workspace\fod\Infrastructure\Ant\build.xml -Doracle.home=D:\jdeveloper11\ -listener oracle.jdevimpl.debugger.ant.DebugBuildListener buildAll
Buildfile: D:\jdeveloper11\workspace\fod\Infrastructure\Ant\build.xml
Debugger connected to local process.
createDatabase:
refreshSchema:
BUILD FAILED
D:\jdeveloper11\workspace\fod\Infrastructure\Ant\build.xml:16: The following error occurred while executing this line:
D:\jdeveloper11\workspace\fod\Infrastructure\DBSchema\build.xml:89: The following error occurred while executing this line:
D:\jdeveloper11\workspace\fod\Infrastructure\DBSchema\build.xml:54: The following error occurred while executing this line:
D:\jdeveloper11\workspace\fod\Infrastructure\DBSchema\build.xml:26: java.sql.SQLException: Io exception: The Network Adapter could not establish the connection
Total time: 4 seconds
Process exited.
Debugger disconnected from local process.
note:
I have followed the "readme.html" in the "fod.zip" file exactly,
also there is no problem with my network adapter; I can connect to the Internet without any problem.
Can someone give me the answer?
I also have this problem. When I build the build.xml file, errors happen:
Buildfile: D:\soft\jdevstudio1111TP2\mywork\storefront_techpreview2\Infrastructure\Ant\build.xml
createDatabase:
refreshSchema:
BUILD FAILED
D:\soft\jdevstudio1111TP2\mywork\storefront_techpreview2\Infrastructure\Ant\build.xml:20: The following error occurred while executing this line:
D:\soft\jdevstudio1111TP2\mywork\storefront_techpreview2\Infrastructure\DBSchema\build.xml:89: The following error occurred while executing this line:
D:\soft\jdevstudio1111TP2\mywork\storefront_techpreview2\Infrastructure\DBSchema\build.xml:54: The following error occurred while executing this line:
D:\soft\jdevstudio1111TP2\mywork\storefront_techpreview2\Infrastructure\DBSchema\build.xml:26: D:\softjdevstudio1111TP2\jdbc\lib not found.
Total time: 0 seconds
My build.properties is set as follows (meanwhile, I can connect to the 10gR2 database):
# Master Ant properties file for Fusion Order Demo
# All build files refer to this master list of properties
# $Id: build.properties 812 2007-02-20 07:14:33Z lmunsing $
# Base Directory for library lookup
jdeveloper.home=D:\soft\jdevstudio1111TP2
src.home=..//..
# JDBC info used to create Schema
jdbc.driver=oracle.jdbc.OracleDriver
jdbc.urlBase=jdbc:oracle:thin:@localhost
jdbc.port=1521
jdbc.sid=orcl
# Information about the default setup for the demo user
db.adminUser=system
db.adminUser.password=welcome1
db.demoUser=FOD
db.demoUser.password=welcome1
db.demoUser.tablespace=USERS
db.demoUser.tempTablespace=TEMP
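One possible cause of the `D:\softjdevstudio1111TP2\jdbc\lib not found` error above (an assumption on my part, not confirmed in the thread): in Java `.properties` files the backslash is an escape character, so literal backslashes in Windows paths can be swallowed when the file is parsed, which would explain the missing `\` between `soft` and `jdevstudio1111TP2`. Forward slashes (or doubled backslashes) sidestep this:

```properties
# Forward slashes are safe in Windows paths inside .properties files:
jdeveloper.home=D:/soft/jdevstudio1111TP2
# Equivalent with escaped backslashes:
# jdeveloper.home=D:\\soft\\jdevstudio1111TP2
```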
-
Column store errors during runtime
Hi,
I created a HANA instance in the trial landscape and directed my application to use the HANA instance schema. Over time, I get errors:
SAP DBTech JDBC: [2048]: column store error: search table error: [29] attribute not defined for physical index;internal error: attribute 'NEO_29J91OPIP2IAEWXFUZRHT093R:CA_EV_VENUEen/PK_EVENT_VENUE_ID' not found in indexInfoMap
I'm using the HANA instance instead of the default schema created from the application because I need to run spatial queries, which is currently only available in the HANA instance.
Does anyone know what's wrong?
Additional errors from the HCP logs:
javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.0.v20130507-3faac2b): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [2048]: column store error: search table error: [10000008] exception 10000008:
ims_search_api/Delta/impl/DeserializerContext.cpp:2099
Failed to write data into NEO_95BOXA4K094KH5V4JNKE5KDPZ:CA_EV_VENUE$delta_1$en: Attribute engine failed(6900); $function$=writeDataIntoDelta
Error Code: 2048
Call: SELECT PK_EVENT_VENUE_ID, ADDRESS, DESCRIPTION, EMAIL, EXTERNAL_ID, FAX, LATITUDE, LONGITUDE, NAME, PHONE, WEBSITE, FK_EVENT_DATA_PROVIDER_ID FROM CA_EV_VENUE WHERE ((FK_EVENT_DATA_PROVIDER_ID = ?) AND (EXTERNAL_ID = ?))
bind => [2 parameters bound]
Query: ReadAllQuery(name="EventVenue.findByProvider" referenceClass=EventVenue sql="SELECT PK_EVENT_VENUE_ID, ADDRESS, DESCRIPTION, EMAIL, EXTERNAL_ID, FAX, LATITUDE, LONGITUDE, NAME, PHONE, WEBSITE, FK_EVENT_DATA_PROVIDER_ID FROM CA_EV_VENUE WHERE ((FK_EVENT_DATA_PROVIDER_ID = ?) AND (EXTERNAL_ID = ?))")
The table CA_EV_VENUE contains a spatial column, and only this table is "failing". Other tables, which don't contain spatial columns, are fine.
I'm trying to get HANA DB traces from the HCP team, and was wondering which traces would be helpful to the HANA team. -
Converting column store to row store
Hello Everyone,
I have a question related to the Column store to Row Store conversion:
Is it always necessary to perform the conversion only after all connections to the system have been stopped at the application level?
What would be lost if the conversion were performed with applications or connections to the system still open?
Regards,
Vinay
Hi Vinay,
Can you explain your question more clearly? Do you mean converting a column table to a row table?
Best regards,
Wenjun -
Support for triggers for tables with clustered column-store indexes
Hello,
We are very excited about the new clustered column-store indexes in SQL 2014, and our performance tests show significant performance gains. However, our existing functionality relies on triggers, and they are not allowed on tables with column-store indexes. Does anybody have information as to whether allowing triggers on tables with column-store indexes is anywhere on the radar?
Thank you!
P.S. We spent a lot of time considering various workarounds to avoid triggers, but the amount of work we would have to do is simply overwhelming, so we are considering delaying our upgrade until such functionality hopefully becomes available.
My gut reaction is that this restriction is likely to remain for the foreseeable future. Columnstore indexes are intended for data warehouses, and while I don't do data warehouses myself, I cannot say that a fact table is where I expect to find a trigger.
But if you think that there is a good business case for permitting triggers on columnstore tables, submit a suggestion on
https://connect.microsoft.com/sqlserver/feedback
Note that if you give a business-oriented explanation of why this would be a great benefit, Microsoft is more likely to listen.
Erland Sommarskog, SQL Server MVP, [email protected] -
Usage of sequence for uploading data into tables(column store) using CSV
How can I make use of a sequence when data is uploaded into a (column store) table from a CSV file?
Hi Sharada,
You may have to follow the below steps:
1) Load the data from the flat file into a staging table using the IMPORT command.
2) Call a procedure to load the data further into the final table.
a) Have a look at this procedure to load data:
SAP HANA: Generic Procedure using Arrays to Load delta data into Tables
b) See my reply on the syntax for how to use a sequence:
Auto-Incrementor in stored procedure!
You will have to frame your insert query in such a way as to get the sequence number into the target table.
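A compact sketch of steps 1-2, using Python's sqlite3 as a stand-in (table names are made up for illustration; in HANA the insert would read something like `INSERT INTO final SELECT my_seq.NEXTVAL, name FROM staging`, with sqlite's integer-primary-key autoincrement playing the role of the sequence here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging (name TEXT);                       -- CSV lands here
    CREATE TABLE final (id INTEGER PRIMARY KEY, name TEXT); -- sequenced target
""")

# Step 1: the IMPORT of the flat file, simulated with plain inserts.
conn.executemany("INSERT INTO staging (name) VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",)])

# Step 2: move staging rows into the final table; the integer primary
# key auto-assigns 1, 2, 3 ... the way a sequence's NEXTVAL would.
conn.execute("INSERT INTO final (name) SELECT name FROM staging")

rows = conn.execute("SELECT id, name FROM final ORDER BY id").fetchall()
assert rows == [(1, "alpha"), (2, "beta"), (3, "gamma")]
```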
Regards,
Krishna Tangudu -
Disadvantages of Oracle RDF Store
Hello all,
what are, in your opinion, the main disadvantages of the Oracle RDF Store? What are the disadvantages compared to other repository systems (e.g. Fedora Commons)?
It would be great to get some feedback from anyone who has experience in daily work and development with the Oracle RDF Store.
Thanks in advance and
best wishes,
Oliver
I do a comparison and have enough positive facts (but if there are any others I would take them, too). I compare .NET, Forms, OAF and APEX with pros and cons, so I need the cons of Forms ;-)
Your comparison logic is incorrect (sorry to say it, but I couldn't help myself).
You are trying to compare a peanut with an apple, a cucumber, and a watermelon; they cannot be compared.
So it is with these products: each of them is in a separate category, and if you want to choose one, you need to think about what your business needs are.
Is it for a small business? What are the transaction types? Do you need OLAP? How many users? Integration? Platform? Interface... (you want me to continue :-) ?)
.NET: try to deploy an application written in .NET against a database other than Access or SQL Server.
APEX is a completely different technology from Forms; the same requirements can't be applied to both.
Tony -
SQL syntax error differs between environments after converting to column store indexes
We have two 'identical' SQL boxes with SQL 2012 Enterprise Edition installed. In our developer environment only, we just converted to column store indexes. Now when we run this SQL, we get the following error message, ONLY in the dev environment:
SELECT DISTINCT RatingDescription
FROM Rating
where RatingDescription > 1
Msg 245, Level 16, State 1, Line 2
Conversion failed when converting the varchar value 'No Rating Needed' to data type int.
However, this SQL works in our production environment, and the prod environment has the same data and schema as dev. As soon as we disable the column store index on the dev box, the SQL works without any problems. Is this a known feature or a bug that anyone knows about?
Additional info:
The RatingDescription field is a varchar(100) and contains numbers and text.
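A note on the mechanism, sketched in Python: SQL Server makes no guarantee about the order in which predicates are evaluated, so the implicit varchar-to-int conversion in `RatingDescription > 1` may or may not touch the non-numeric rows depending on the plan, and a columnstore index changes the plan. The robust fixes are comparing against a string literal (e.g. `> '1'`) or using `TRY_CONVERT`.

```python
ratings = ["3", "No Rating Needed", "5"]  # mixed contents of a varchar column

def gt_one(value):
    return int(value) > 1  # mimics SQL's implicit varchar -> int cast

# Plan A: the cast runs against every row and blows up on the text row.
try:
    [r for r in ratings if gt_one(r)]
    conversion_failed = False
except ValueError:
    conversion_failed = True
assert conversion_failed

# Plan B: another predicate happens to filter the bad row out before
# the cast runs (as "AND analystName = 'Bob'" did), so it succeeds.
survivors = [r for r in ratings if r.isdigit() and gt_one(r)]
assert survivors == ["3", "5"]
```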
Current SQL server editions on both servers:
Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)
Enterprise Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: )
So I need to amend my statement.
This statement does fail, as everyone is saying:
SELECT DISTINCT RatingDescription
FROM Rating
where RatingDescription > 1
However, this statement does not fail:
SELECT DISTINCT RatingDescription
FROM Rating
where RatingDescription > 1
AND analystName = 'Bob'
I left out the analystName criteria because I didn't think it mattered, but clearly it allows the SQL to run, because Bob has no text ratings. However, my findings still hold: this exact SQL runs successfully when I disable the CS index, and as soon as I enable/rebuild it, it fails with "Conversion failed when converting the varchar value". -
How to trigger the alert "Column store unload" for reason: unused resource
Good day,
I have an issue with the column store unload.
As for this alert in HANA Studio, there are three possible reasons: low memory, explicit, and unused resource.
The first is a lack of memory, so the column table is unloaded. The second is someone unloading a column table via SQL. These two reasons are easy to see, and they are usually the main causes of this alert. However, I can't trigger the last one.
Is there any good method to trigger it?
I attached some screenshots of the workaround.
Any help will be appreciated.
Thank you!
Best regards
Geoffrey Zhang
Message was edited by: Tom Flanagan
Hi Geo,
Not too sure if the idea for KBAs has changed, but in my understanding they were never meant to replace documentation or to provide complete topic overviews.
Given the nature of this alert, it would be weird to
a) get an alert at all for this
b) have someone react to it.
The function to unload containers after they haven't been used in a long time is part of the garbage collection in SAP HANA. Why would it be necessary to have an alert for that?
And then what should anyone do about it? What do DBAs usually do, when long unused data gets displaced to disk?
Exactly: nothing
Concerning my remark about SCN usage by SAP employees: I meant that there are plenty of SAP-internal forums and communities available, where you would probably get less exposure to the rest of the world and might get answers with more detailed information than is possible on SCN.
As English is the corporate language, and is used on SCN just as in the internal forums, I don't see how that would be an obstacle to using the internal forums.
About your unanswered questions: just like any other forum, SCN is based on voluntary contributions. There should be no expectation about how long it takes to get answers.
Nobody is paid to answer the questions here, so it's all on a community basis.
- Lars -
[Oracle 11g] Store filename as VARCHAR2 and its content as XMLType
Hi all,
The version of Oracle used is 11.2.0.3.0.
I would like to load an XML file into a table from AIX with a shell script.
Here is the test case table:
ALTER TABLE mytable DROP PRIMARY KEY CASCADE;
DROP TABLE mytable CASCADE CONSTRAINTS;
CREATE TABLE mytable (
filename VARCHAR2 (50 BYTE),
created DATE,
content SYS.XMLTYPE,
CONSTRAINT pk_mytable PRIMARY KEY (filename) USING INDEX
)
XMLTYPE content STORE AS BINARY XML;
The problem is to store the file name too.
So I added a step to create the control file from a generic one like this:
#!/bin/ksh
FILES="sample.xml"
CTL=generic.CTL
for f in $FILES
do
    # substitute the real file name for the :FILE placeholder
    sed -e "s/:FILE/$f/g" "$CTL" > "$f.ctl"
    sqlldr scott/tiger@mydb control="$f.ctl" data="$f"
    rc=$?
    echo "Return code: $rc."
done
The filename and the data are stored in the table, but I get this error message after executing the shell script:
SQL*Loader: Release 11.2.0.3.0 - Production on Mon Jun 11 13:42:21 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
SQL*Loader-275: Data is in control file but "INFILE *" has not been specified.
Commit point reached - logical record count 64
And here is the content of the log file:
SQL*Loader: Release 11.2.0.3.0 - Production on Mon Jun 11 14:13:43 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
SQL*Loader-275: Data is in control file but "INFILE *" has not been specified.
Control File: sample.ctl
Data File: sample.xml
Bad File: sample.bad
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
Table MYTABLE, loaded from every logical record.
Insert option in effect for this table: APPEND
Column Name Position Len Term Encl Datatype
FILENAME CONSTANT
Value is 'sample.xml'
CONTENT DERIVED * EOF CHARACTER
Dynamic LOBFILE. Filename in field FILENAME
Record 2: Rejected - Error on table MYTABLE.
ORA-00001: unique constraint (PK_MYTABLE) violated
Record 3: Rejected - Error on table MYTABLE.
ORA-00001: unique constraint (PK_MYTABLE) violated
Record 4: Rejected - Error on table MYTABLE.
ORA-00001: unique constraint (PK_MYTABLE) violated
Record 5: Rejected - Error on table MYTABLE.
ORA-00001: unique constraint (PK_MYTABLE) violated
and so on...
Record 52: Rejected - Error on table MYTABLE.
ORA-00001: unique constraint (PK_MYTABLE) violated
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table MYTABLE:
1 Row successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 1664 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 64
Total logical records rejected: 51
Total logical records discarded: 0
Run began on Mon Jun 11 14:13:43 2012
Run ended on Mon Jun 11 14:13:43 2012
Elapsed time was: 00:00:00.23
CPU time was: 7586:56:08.38
It seems that the control file tries to insert as many rows as there are lines in the file sample.xml!
So I cannot check whether the load completed correctly, since the return code is always 2!
Is this the correct way to solve my problem?
What can I do to improve it?
Another question!
Here is another way of doing it.
#!/bin/ksh
FILEPATH=./data/sample.xml
FILENAME=$(basename ${FILEPATH})
CTLFILE=load_data.ctl
cat > ${CTLFILE} <<EOF
LOAD DATA
INFILE *
INTO TABLE mytable APPEND
(
filename CONSTANT "${FILEPATH}",
created "SYSDATE",
content LOBFILE (filename) TERMINATED BY EOF
)
BEGINDATA
${FILEPATH}
EOF
sqlldr scott/tiger@mydb control=${CTLFILE}
rc=$?
echo "Return code: $rc."
I've tested this script; it's okay.
Now I want to store just the basename of the file: ${FILENAME}.
How can I do that?
The problem is that I can no longer write "LOBFILE (filename)", because it would no longer point to the correct path of the file!
Can someone help me, please?
Thanks.
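One possible approach (an untested sketch, field names assumed): keep the full path in a FILLER field, which SQL*Loader can still hand to LOBFILE for locating the file but never stores in the table, and store the basename through CONSTANT. A small Python helper to generate such a control file:

```python
import os

def make_ctl(filepath):
    """Build a SQL*Loader control file: the path stays in a FILLER
    field (used only by LOBFILE), the basename is what gets stored."""
    basename = os.path.basename(filepath)
    return f"""LOAD DATA
INFILE *
INTO TABLE mytable APPEND
(
  filepath FILLER CHAR(255),
  filename CONSTANT "{basename}",
  created  "SYSDATE",
  content  LOBFILE(filepath) TERMINATED BY EOF
)
BEGINDATA
{filepath}
"""

ctl = make_ctl("./data/sample.xml")
assert 'CONSTANT "sample.xml"' in ctl
assert "LOBFILE(filepath)" in ctl
```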