Table with a Long column
I had a problem: when I create a table with the SQL
"create table tab2 as (select * from tab1);"
I get the error ORA-00997: illegal use of LONG datatype.
I figured out this happens because tab1 has a LONG column.
How can I recreate a table with a LONG column?
You could use the COPY feature once your second table is created:
DROP TABLE one;
DROP TABLE two;
CREATE TABLE one
(n NUMBER
,l LONG);
INSERT INTO one
VALUES
(1
,'123');
CREATE TABLE two
(n NUMBER
,l LONG);
COPY FROM apps/apps@dblink TO apps/apps@dblink APPEND two USING SELECT * FROM one
SELECT * FROM one;
SELECT * FROM two;
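If converting away from LONG is an option, another way around ORA-00997 is a CTAS that converts the LONG to a CLOB with TO_LOB. This is only a sketch: the column names are assumptions, and the columns must be listed explicitly rather than using SELECT *:

```sql
-- Sketch with assumed column names: "n" and "long_col" stand in for
-- tab1's actual columns. TO_LOB converts the LONG value to a CLOB,
-- which CREATE TABLE ... AS SELECT is allowed to copy.
CREATE TABLE tab2 AS
SELECT n, TO_LOB(long_col) AS long_col
FROM tab1;
```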
Similar Messages
-
URGENT -- How to duplicate a table with a long column?
I am using a ddl script --
Create table ONE as (select * from TWO);
This gives an error saying ONE has a LONG column.
Any ideas?
Darshan
If you can create the second table, you can populate it through the COPY function.
See the following testcase:
DROP TABLE one;
DROP TABLE two;
CREATE TABLE one
(n NUMBER
,l LONG);
INSERT INTO one
VALUES
(1
,'123');
CREATE TABLE two
(n NUMBER
,l LONG);
COPY FROM apps/apps@db TO apps/apps@db APPEND two USING SELECT * FROM one
SELECT * FROM one;
SELECT * FROM two;
-
Importing a table with a BLOB column is taking too long
I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
1 - set buffer to 500 Mb
2 - pre-created the table and turned off logging
3 - set indexes=N
4 - set constraints=N
5 - I have 10 online redo logs with 200 MB each
6 - Even turned off logging at the database level with disablelogging = true
It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
For your info:
Computer: Sun v490 with 16 CPUs, solaris 10
memory: 10 Gigabytes
SGA: 4 Gigabytes
Legatti,
I have feedback=10000. However, by monitoring the import, I know that it's loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
Thanks for your reply. -
Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field (29 LONG fields), I cannot insert a dataset anymore.
Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
Thanks in advance
Michael
appendix:
- Create and Insert command and error message
- MaxDB version and its parameters
Create and Insert command and error message
CREATE TABLE "DBA"."AZ_Z_TEST02"
(
"ZTB_ID" Integer NOT NULL,
"ZTB_NAMEOFREPORT" Char (400) ASCII DEFAULT '',
"ZTB_LONG_COMMENT" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_00" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_01" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_02" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_03" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_04" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_05" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_06" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_07" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_08" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_09" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_10" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_11" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_12" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_13" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_14" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_15" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_16" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_17" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_18" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_19" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_20" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_21" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_22" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_23" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_24" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_25" LONG ASCII DEFAULT '',
"ZTB_LONG_TEXTBLOCK_26" LONG ASCII DEFAULT '',
PRIMARY KEY ("ZTB_ID")
)
The insert command
INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
works fine. If I add the LONG field
"ZTB_LONG_TEXTBLOCK_27" LONG ASCII DEFAULT '',
the following error occurs:
Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
MaxDB version and its parameters
All db params given by
dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
are
KERNELVERSION KERNEL 7.5.0 BUILD 026-123-094-430
INSTANCE_TYPE OLTP
MCOD NO
RESTART_SHUTDOWN MANUAL
SERVERDBFOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 10
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 LOG_001
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 64000
DATA_VOLUME_NAME_0001 DAT_0001
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0001 64000
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 22
MULTIO_BLOCK_CNT 4
DELAYLOGWRITER 0
LOG_IO_QUEUE 50
RESTARTTIME 600
MAXCPU 1
MAXUSERTASKS 50
TRANSRGNS 8
TABRGNS 8
OMSREGIONS 0
OMSRGNS 25
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
ROWRGNS 8
MINSERVER_DESC 16
MAXSERVERTASKS 20
_MAXTRANS 288
MAXLOCKS 2880
LOCKSUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
USEASYNC_IO YES
IOPROCSPER_DEV 1
IOPROCSFOR_PRIO 1
USEIOPROCS_ONLY NO
IOPROCSSWITCH 2
LRU_FOR_SCAN NO
PAGESIZE 8192
PACKETSIZE 36864
MINREPLYSIZE 4096
MBLOCKDATA_SIZE 32768
MBLOCKQUAL_SIZE 16384
MBLOCKSTACK_SIZE 16384
MBLOCKSTRAT_SIZE 8192
WORKSTACKSIZE 16384
WORKDATASIZE 8192
CATCACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 1632
INIT_ALLOCATORSIZE 229376
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
TASKCLUSTER01 tw;al;ut;2000sv,100bup;10ev,10gc;
TASKCLUSTER02 ti,100dw;30000us;
TASKCLUSTER03 compress
MPRGN_QUEUE YES
MPRGN_DIRTY_READ NO
MPRGN_BUSY_WAIT NO
MPDISP_LOOPS 1
MPDISP_PRIO NO
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 0
MPRGN_PRIO NO
MAXRGN_REQUEST 300
PRIOBASE_U2U 100
PRIOBASE_IOC 80
PRIOBASE_RAV 80
PRIOBASE_REX 40
PRIOBASE_COM 10
PRIOFACTOR 80
DELAYCOMMIT NO
SVP1_CONV_FLUSH NO
MAXGARBAGECOLL 0
MAXTASKSTACK 1024
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
DWIO_AREA_SIZE 50
DWIO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
FBMLOW_IO_RATE 10
CACHE_SIZE 10000
DWLRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
DATACACHE_RGNS 8
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 11
SEQUENCE_CACHE 1
IDXFILELIST_SIZE 2048
SERVERDESC_CACHE 73
SERVERCMD_CACHE 21
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
READAHEADBLOBS 25
RUNDIRECTORY E:\_mp\u_v_dbs\EVERW_C5
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 0
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 648
EXTERNAL_DUMP_REQUEST NO
AKDUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
UTILITYPROTFILE dbm.utl
UTILITY_PROTSIZE 100
BACKUPHISTFILE dbm.knl
BACKUPMED_DEF dbm.mdf
MAXMESSAGE_FILES 0
EVENTALIVE_CYCLE 0
_SHAREDDYNDATA 10280
_SHAREDDYNPOOL 3607
USE_MEM_ENHANCE NO
MEM_ENHANCE_LIMIT 0
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-13 13:47:17
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
DIAGSEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 21333
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_JOIN_OUTER YES
JOIN_OPERATOR_IMPLEMENTATION IMPROVED
JOIN_TABLEBUFFER 128
OPTIMIZE_FETCH_REVERSE YES
SET_VOLUME_LOCK YES
SHAREDSQL NO
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FORBID_LOAD_BALANCING NO
Lars Breddemann wrote:
> Hi Michael,
>
> this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
> Really.
>
> Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
> Anyhow, when I use
>
> insert into "AZ_Z_TEST02" values (87,'','','','','','','','','','','','','','','',''
> ,'','','','','','','','','','','','','','','','')
>
> it works fine.
It solves my problem. Thanks a lot. -- I can hardly believe that this is all that was needed to work around the bug. That may be the reason why I had not given it a try.
>
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what value the new tuple will have), you may want to change your code to this.
>
> Now to the other errors:
> - 28 Long values per row?
> What the heck is wrong with the data design here?
> Honestly, you can save data up to 2 GB in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
>
> - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing that you get 10000 of reports with the same name...
You are right. This table looks a bit strange. The story behind it is: each crystal report in the application has a few textblocks which are the same for all the e.g. persons the e.g. letter is created for. In principle, the textblocks could be added directly to the crystal report. However, as is often the case, these textblocks may change once in a while. Thus, I put the texts of the textblocks into this "strange" db table (one row for each report, one field for each textblock; the name of the report is given by "ztb_nameofreport"), and the application offers a menu by which these textblocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the sql select command of the crystal report very much if they were integrated into that select command. Thus it is realized in another way: the texts are read before the crystal report is loaded, then the texts are "given" to the crystal report (by its parameters), and finally the crystal report is loaded.)
>
> - MaxDB 7.5 Build 26?? Where have you been the last years?
> Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
> With 7.6. I was not able to reproduce your issue at all.
The customer still has Win98 clients. MaxDB odbc driver 7.5.00.26 does not work for them. I got the hint to use odbc driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and odbc driver 7.3 work together?
All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
>
> - Are you really putting your data into the DBA schema? Don't do that, ever.
> DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
> Create a user/schema for your application data and put your tables into that.
>
> KR Lars
In the first MaxDB version I used, schemas were not available. I haven't changed it afterwards. Is there an easy way to "move an existing table into a new schema"?
Michael -
I recently began using reminders in iCal and downloaded the iPhone and iPad app. It works great. However, the reminders in iCal on the COMPUTER are difficult to work with: a thin, long column on the right rather than its own screen. Is there a reminders app for the COMPUTER that is more user friendly?
sgreenie,
If you are willing to wait, Apple - OS X Mountain Lion - Inspired by iPad. Made for the Mac explains that a "Reminders" application will be included in the next release of OS X. -
What index is suitable for a table with no unique columns and no primary key
alpha   beta   gamma   col1   col2   col3
100     1      -1      a      b      c
100     1      -2      d      e      f
101     1      -2      t      t      y
102     2      1       j      k      l
Sample data is above, and below are the datatypes for each of the columns:
alpha datatype: string
beta datatype: integer
gamma datatype: integer
col1, col2, col3 are all string datatypes.
Note: the columns are not unique, and we would be using alpha, beta, gamma to uniquely identify a record. As you can see from my sample data, this table does not have an index. I would like an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with these covering columns will be better.
What index type would you recommend in this case? Say the data volume is 1 million records, and we always use the alpha, beta, gamma columns when we filter or query records.
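As a sketch of what is being asked for (the table name dbo.Test is an assumption taken from the query quoted elsewhere in this thread), a clustered index on the three identifying columns would look like this in T-SQL:

```sql
-- Assumed table name dbo.Test. Clustering on the combined key
-- (alpha, beta, gamma) physically orders the rows by the columns
-- used in every filter, at the cost of one clustered index per table.
CREATE CLUSTERED INDEX IX_Test_alpha_beta_gamma
    ON dbo.Test (alpha, beta, gamma);
```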
Mudassar
Many thanks for your explanation.
When I tried the query below on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX including the columns [beta], [gamma], [col1], [col2], [col3]:
SELECT [alpha]
,[beta]
,[gamma]
,[col1]
,[col2]
,[col3]
FROM [TEST].[dbo].[Test]
where [alpha]='10100'
My question is: why didn't it suggest a CLUSTERED index, and why did it choose a NONCLUSTERED index?
Mudassar -
Cartesian of data from two tables with no matching columns
Hello,
I was wondering – what’s the best way to create a Cartesian of data from two tables with no matching columns in such a way, so that there will be only a single SQL query generated?
I am thinking about something like:
for $COUNTRY in ns0:COUNTRY()
for $PROD in ns1:PROD()
return <Results>
<COUNTRY> {fn:data($COUNTRY/COUNTRY_NAME)} </COUNTRY>
<PROD> {fn:data($PROD/PROD_NAME)} </PROD>
</Results>
And the expected result is combination of all COUNTRY_NAMEs with all PROD_NAMEs.
What I’ve noticed when checking query plan is that DSP will execute two queries to have the results – one for COUNTRY_NAME and another one for PROD_NAME. Which in general results in not the best performance ;-)
What I’ve noticed also is that when I add something like:
where COUNTRY_NAME != PROD_NAME
everything is ok and there is only one query created (it's red in the Query plan, but still it's ok from my pov). Still it looks to me more like a workaround, not a real best approach. I may be wrong though...
So the question is – what’s the suggested approach for such queries?
Thanks,
Leszek
Edited by xnts at 11/19/2007 10:54 AM
"Which in general results in not the best performance"
I disagree. Only for two tables with very few rows would a single sql statement give better performance.
Suppose there are 10,000 rows in each table - the cross-product will result in 100 million rows. Sounds like a bad idea. For this reason, DSP will not push a cross-product to a database. It will get the rows from each table in separate sql statements (retrieving only 20,000 rows) and then produce the cross-product itself.
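For comparison, the single pushed-down statement being discussed would be a plain cross join; this is a hedged SQL sketch using the table and column names from the XQuery above:

```sql
-- Cartesian product of two unrelated tables. With N and M rows this
-- returns N*M rows, which is why DSP avoids pushing it to the database.
SELECT c.COUNTRY_NAME, p.PROD_NAME
FROM COUNTRY c
CROSS JOIN PROD p;
```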
If you want to execute sql with cross-products, you can create a sql-statement based dataservice. I recommend against doing so. -
How to create table with rows and columns in the layout mode?
One of my friends advised me to develop my whole site in
layout mode, as he says it's better than standard mode,
but I could not make an ordinary table with rows and columns
in layout mode.
Is there anyone who can tell me how to?
thanx alot
Your friend is obviously not a reliable source of HTML
information.
Murray --- ICQ 71997575
Adobe Community Expert
(If you *MUST* email me, don't LAUGH when you do so!)
==================
http://www.dreamweavermx-templates.com
- Template Triage!
http://www.projectseven.com/go
- DW FAQs, Tutorials & Resources
http://www.dwfaq.com - DW FAQs,
Tutorials & Resources
http://www.macromedia.com/support/search/
- Macromedia (MM) Technotes
==================
"Mr.Ghost" <[email protected]> wrote in
message
news:f060vi$npp$[email protected]..
> One of my friends advised me to develop my whole site on
the layout mode
> as its
> better than the standard as he says
> but I couldnot make an ordinary table with rows and
columns in th layout
> mode
> is there any one who can tell me how to?
> thanx alot
> -
Tabular Form based on table with lots of columns - how to avoid scrollbar?
Hi everybody,
I'm an old Forms and VERY new APEX user. My problem is the following: I have to migrate a form application to APEX.
The form is based on a table with lots of columns. In Forms you can spread the data over different tab pages.
How can I realize something similar in APEX? I definitely don't want to use a horizontal scroll bar...
Thanks in advance
Hilke
If the primary key is created by the users themselves (which is not recommended; you should keep the primary key as is and give the user a separate visible ID, which would be the varchar2, since the user really shouldn't ever edit the primary key), then all you need to do is make sure that the table is not populated with a primary key in the wizard, and then make sure that you cannot insert a null into your varchar primary key text field.
If you're doing it this way, I would make a validation on the page which would run off a SQL Exists validation, something along the lines of:
SELECT <primary key column>
FROM <your table>
WHERE upper(<primary key column>) = upper(<text field containing user input>);
and if it already exists, fire the validation claiming that it already exists and to come up with a new primary key.
Like I said, you really should have a primary key which the database uses to refer to each individual record, and then an almost pseudo-primary key that the user can use. For example, in the table it would look like this:
TABLE1
table_id (this is the primary key which you should NOT change)
user_table_id (this is the pretend primary key which the user can change)
other_columns
etc
etc
hope this helps in some way -
Difference between an XMLType table and a table with an XMLType column?
Hi all,
Still trying to get my mind around all this XML stuff.
Can someone concisely explain the difference between:
create table this_is_xmltype_tab of xmltype;
and
create table this_is_tab_w_xmltype_col(id number, document xmltype);
What are the relative advantages and disadvantages of each approach? How do they really differ?
Thanks,
-Mark
There is another pointer, Mark, that I realized when I was thinking about the differences...
If you look up "xdb:annotations" in the manual, you will learn about a method of using an XML Schema to generate your whole design out of the box, in terms of physical layout and/or design principles. In my mind this should be the preferred solution if you are dealing with very complex XML Schema environments. Taking your XML Schema as your single-point design layout, which during the actual implementation automatically generates and builds all your needed database objects and their physical requirements, has great advantages in terms of design version management etc., but...
...it will automatically create an XMLType table (based on OR, Binary XML or "hybrid" storage principles, aka the ones that are XML Schema driven) and not, AFAIK, an XMLType column structure: so, as in "our" case, a table with an id column and an xmltype column.
In principle you could relate to this relationally as:
"I have created an EER diagram and a physical diagram, and I mix the content/info of those two into one diagram. Then I execute it in the database, and the end result will be a database user/schema that has all the xxxx amount of physical objects I need, the way I want them to be..."
...but it will be in the form of an XMLType table structure...
xdb:annotations can be used to create things like:
- enforce database/company naming conventions
- DOM validation enabled or not
- automatic IOT or BTree index creation (for instance in OR XMLType storage)
- sort search order enforced or not
- default tablenames and owners
- extra column or table property settings like for partitioning XML data
- database encoding/mapping used for SQL and binary storage
- avoid automatic creation of Oracle objects (tables/types/etc), for instance, via xdb:defaultTable="" annotations
- etc...
See here for more info: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#ADXDB4519
and / or for more detailed info:
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030452
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030995
http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#CHDCEBAG
... -
When I need a snapshot with a long column...
Hi everybody
Where I work we have more than one database, and sometimes it is useful to keep the structures of some objects in the different environments as similar as we can.
I developed a procedure which builds an sql script containing all the DDLs needed to make tables of the development environment compatible with procedures of the production environment.
To do so, the procedure, which runs on the development DB, must very often read some dictionary views of the production DB (dba_tables, dba_tab_columns, dba_indexes, dba_ind_columns, dba_constraints and dba_cons_columns). If this happens too frequently, execution time becomes very long, so I would like to keep a snapshot of each one of those views in the development DB, in order to access the network just once, in the refresh step.
The problem appears with those views containing a long column, which I need to access in order to retrieve the default values of a column and the check condition of a constraint.
Any alternative would be good, but I would prefer something with an easy refresh procedure (to give a sense of it, it shouldn't be a job heavier than the main one). Perhaps I can't use the data pump api because the production DB runs on oracle 8i.
Thanks, Bye
Hi,
Try iMessaging your own iPhone and see if your account is actually logged in.
It can appear "enabled" and On line but sometimes it is not.
Also try sending from your iPhone (number) to your Apple ID and see if it gets to the Mac.
If it does not use the Sign Out button.
Then shut down the Mac
On restart add back the Apple ID to the iMessages account settings.
I also just checked My wife's iPhone as she popped up red.
It turned out her iPhone did not have Messages On.
10:00 pm Tuesday; January 28, 2014
iMac 2.5Ghz 5i 2011 (Mavericks 10.9)
G4/1GhzDual MDD (Leopard 10.5.8)
MacBookPro 2Gb (Snow Leopard 10.6.8)
Mac OS X (10.6.8),
Couple of iPhones and an iPad -
Error when inserting in a table with an identity column
Hi,
I am new to Oracle SOA suite and ESB.
I have been through the Oracle training and have worked for about 2 months with the tooling.
We have a Database adapter that inserts data into 5 tables with relations to each other.
Each table has its own NOT NULL identity column.
When running/testing the ESB service we get the error at the end of this post.
From this we learned that the Database adapter inserts the value NULL into the identity column.
We cannot find in the documentation how to get the database adapter to skip this first column and ignore it.
Is this possible within the wizard? Our impression is no.
Is this possible somewhere else?
And if so, how can we do this?
If anyone can help it would be greatly appreciated
Pepijn
Generic error.
oracle.tip.esb.server.common.exceptions.BusinessEventRejectionException: An unhandled exception has been thrown in the ESB system. The exception reported is: "org.collaxa.thirdparty.apache.wsif.WSIFException: esb:///ESB_Projects/GVB_PDI_PDI_Wegschrijven_Medewerkergegevens/testurv.wsdl [ testurv_ptt::insert(VastAdresCollection) ] - WSIF JCA Execute of operation 'insert' failed due to: DBWriteInteractionSpec Execute Failed Exception.
insert failed. Descriptor name: [testurv.VastAdres]. [Caused by: Cannot insert explicit value for identity column in table 'VastAdres' when IDENTITY_INSERT is set to OFF.]
; nested exception is:
ORABPEL-11616
DBWriteInteractionSpec Execute Failed Exception.
insert failed. Descriptor name: [testurv.VastAdres]. [Caused by: Cannot insert explicit value for identity column in table 'VastAdres' when IDENTITY_INSERT is set to OFF.]
Caused by Exception [TOPLINK-4002] (Oracle TopLink - 10g Release 3 (10.1.3.3.0) (Build 070608)): oracle.toplink.exceptions.DatabaseException
Internal exception: com.microsoft.sqlserver.jdbc.SQLServerException: Cannot insert explicit value for identity column in table 'VastAdres' when IDENTITY_INSERT is set to OFF. Error code: 544
Call:INSERT INTO dbo.VastAdres (ID, BeginDatum, Einddatum, Land, Plaats, Postcode, VolAdres) VALUES (?, ?, ?, ?, ?, ?, ?)
bind => [null, 1894-06-24 00:00:00.0, 1872-09-04 00:00:00.0, Nederland, Wijdewormer, 1456 NR, Oosterdwarsweg 8]
Query:InsertObjectQuery(<VastAdres null />).
Hi,
Click on the resources tab in the ESB system/project to see the ESB system design and all the components in it.
Click on the Database adapter that you want to edit, and make the necessary changes.
Check this link.
http://download-uk.oracle.com/docs/cd/B31017_01/core.1013/b28764/esb008.htm for "6.8.2 How to Modify Adapter Services" section.
If you are calling a database procedure which in turn makes the insert, you will have to make the changes in the database, and your job will be much simpler. It seems there are limitations on what you can change in the Database adapter once it is created. Please check the link for further details.
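For completeness, the two usual database-side ways around this SQL Server error are to omit the identity column from the insert or to enable IDENTITY_INSERT for the session. The table and column names below follow the error message; treat this as a sketch, not the adapter configuration itself:

```sql
-- Option 1: let the database generate the ID by leaving it out of the
-- column list entirely.
INSERT INTO dbo.VastAdres (BeginDatum, Einddatum, Land, Plaats, Postcode, VolAdres)
VALUES ('1894-06-24', '1872-09-04', 'Nederland', 'Wijdewormer', '1456 NR', 'Oosterdwarsweg 8');

-- Option 2: explicitly allow inserts into the identity column for this
-- session, then switch it back off.
SET IDENTITY_INSERT dbo.VastAdres ON;
INSERT INTO dbo.VastAdres (ID, BeginDatum, Einddatum, Land, Plaats, Postcode, VolAdres)
VALUES (1, '1894-06-24', '1872-09-04', 'Nederland', 'Wijdewormer', '1456 NR', 'Oosterdwarsweg 8');
SET IDENTITY_INSERT dbo.VastAdres OFF;
```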
Thanks,
Rajesh -
Tables with different-width columns
Whilst creating a table with multiple rows and columns, I need some of the columns to be different widths in each row. I can do this easily in Word by simply selecting the cell and command-dragging the vertical line at the left or right hand side.
When I do this in InDesign, the whole damn column moves. I do not want this.
Also - as in Word - I need to be able to calculate sums.
Why is none of this possible with such a powerful application as InDesign CC 2014? Seems like Microsoft Word kicks InDesign CC 2014's *** on this.
I do not want InDesign to work like Word, oh no. I just designed a 200 page book in InDesign; it was a breeze. I shudder at the thought of having to do that in Word. The table functions of the application are limited. Here's the thing: I want to be able to lay out a specialised table/form with my specialised design tool, and be able to specify the sizes of columns, cells and rows as I choose, and change them on the fly if necessary. My screwdriver drives screws in, and back out again, as I choose. I can slide an individual column divider one way but not the other. It's a bit like having an all-singing, all-dancing Swiss Army knife that doesn't have the corkscrew, and you're out on a picnic with a nice bottle of red in your basket.
A guy from Adobe remote-shared my screen, telling me what I wanted to achieve was possible. I showed him using Word how easy it was. He tried and failed in InDesign and conceded that it was in fact not possible. He gave me a link to Adobe for a feature request! -
How to create a concatenated index with a long column (Urgent!!)
We have a situation where we need to create a concatenated unique
index with one of the columns of the "long" datatype. Oracle does
not allow a lot of things with long columns.
Does anyone know if this is possible, or if there is a way to get
around it.
All help is appreciated!!!!
From the Oracle SQL Reference ...
"You cannot create an index on columns or attributes whose
type is user-defined, LONG, LONG RAW, LOB, or REF,
except that Oracle supports an index on REF type columns
or attributes that have been defined with a SCOPE clause."
Doesn't mention CLOB or BLOB types, so perhaps you
should consider using one of those types instead. I have a
feeling that the LONG type is now deprecated.
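A hedged sketch of that suggestion, with assumed table and column names: Oracle (9i and later) can convert a LONG column to a CLOB in place, after which CLOB-aware options such as an Oracle Text index become available. A plain B-tree index still cannot be built directly on the LOB itself.

```sql
-- Assumed names "my_table" and "long_col". Converts the LONG column
-- to a CLOB in place; existing data is migrated automatically.
ALTER TABLE my_table MODIFY (long_col CLOB);
```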