Portlets and Primary Keys
I'm just learning to use Portal.
So I have this small employee database with three tables: Person, Location and Salary. The primary key is person_id. I'm setting up the page so that the Person table is displayed in a portlet at the top of the page and all the other tables are in tabs: one tab for Location and another for Salary.
What I want to do is to have the person_id passed down to location and salary. How do I do that?
Also person_id is my foreign key in the location and salary tables.
Similar Messages
-
Difference between Unique key and Primary key(other than normal difference)
Hello,
1) Can anyone tell me any difference between a unique key and a primary key, other than a unique key allowing NULLs?
2) What is the difference between the keywords 'DISTINCT' and 'UNIQUE' in a SQL query?
Thanks in advance.
Hi,
There is no difference between DISTINCT and UNIQUE in a query, as the session below shows. If you don't believe me, then see the documentation on OTN.
Ott Karesz
http://www.trendo-kft.hu
SQL> create table scott.tbl_clob
2 (sss CLOB)
3 /
Table created.
SQL> insert into scott.tbl_clob values('wrwrwrw')
2 /
1 row created.
SQL> insert into scott.tbl_clob values('wrwrwrw')
2 /
1 row created.
SQL> select distinct sss from scott.tbl_clob
2 /
select distinct sss from scott.tbl_clob
ERROR at line 1:
ORA-00932: inconsistent datatypes
SQL> select unique sss from scott.tbl_clob
2 /
select unique sss from scott.tbl_clob
ERROR at line 1:
ORA-00932: inconsistent datatypes
SQL> select distinct to_char(sss) from scott.tbl_clob
2 /
TO_CHAR(SSS)
wrwrwrw
SQL> select unique to_char(sss) from scott.tbl_clob
2 /
TO_CHAR(SSS)
wrwrwrw
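As for the first question, the remaining practical differences can be sketched as follows (a sketch with illustrative table and column names, not taken from the thread):

```sql
-- Sketch: a PRIMARY KEY implies NOT NULL; a UNIQUE constraint does not.
-- A table can have only one primary key, but many unique constraints.
CREATE TABLE t_demo (
  id   NUMBER PRIMARY KEY,   -- implicitly NOT NULL
  code NUMBER UNIQUE         -- NULLs allowed; in Oracle, even several of them
);
INSERT INTO t_demo VALUES (1, NULL);    -- succeeds
INSERT INTO t_demo VALUES (2, NULL);    -- also succeeds
INSERT INTO t_demo VALUES (NULL, 3);    -- fails: ORA-01400 cannot insert NULL
```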
-
Dynamic SQL Joining between tables and Primary keys being configured within master tables
Team, thanks for your help in advance!
I'm looking to write a dynamic SQL script that reads table names and primary keys from master tables and then joins on those keys for insertion into the target tables.
EG:
INSERT INTO HUB.dbo.lp_order
SELECT *
FROM del.dbo.lp_order t1
WHERE NOT EXISTS ( SELECT *
                   FROM hub.dbo.lp_order t2
                   WHERE t1.order_id = t2.order_id )
SET @rows = @@ROWCOUNT
PRINT 'Table: lp_order; Inserted Records: ' + CAST(@rows AS VARCHAR)
-- Please note: database names are going to remain the same, but table names
-- and join conditions on keys should vary for each table configured in the master tables
Sample of Master configuration tables with table info and PK Info :
Table_Info
Table_info_ID  Table_Name
1              lp_order
7              lp__transition_record
Table_PK_Info
Table_PK_Info_ID  Table_info_ID  PK_Column_Name
2                 1              order_id
8                 7              transition_record_id
There can be more than one join condition for each table
Thanks you !
Rajkumar Yelugu
Hi Rajkumar,
Glad to hear that you figured the question out by yourself.
There's a flaw in the WHILE loop in your sample code; just in case you hadn't noticed it, please see below.
-- In this case, it goes into an infinite loop: when no row satisfies the
-- WHERE clause, "SELECT @ID = ID" leaves @ID unchanged rather than setting it to NULL
DECLARE @T TABLE(ID INT)
INSERT INTO @T VALUES(1),(3),(2)
DECLARE @ID INT
SELECT @ID = MIN(ID) FROM @T
WHILE @ID IS NOT NULL
BEGIN
    PRINT @ID
    SELECT @ID = ID FROM @T WHERE ID > @ID
END
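For completeness, a fixed version of the loop (a sketch; using MIN() so the variable is reliably set to NULL once no higher ID exists):

```sql
DECLARE @T TABLE(ID INT)
INSERT INTO @T VALUES(1),(3),(2)
DECLARE @ID INT
SELECT @ID = MIN(ID) FROM @T
WHILE @ID IS NOT NULL
BEGIN
    PRINT @ID
    -- MIN() returns NULL when no row qualifies, so the loop terminates
    SELECT @ID = MIN(ID) FROM @T WHERE ID > @ID
END
```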
So a cursor would be the appropriate option in your case, please reference below.
DECLARE @Table_Info TABLE
(
    Table_info_ID INT,
    Table_Name VARCHAR(99)
)
INSERT INTO @Table_Info VALUES(1,'lp_order'),(7,'lp__transition_record');
DECLARE @Table_PK_Info TABLE
(
    Table_PK_Info_ID INT,
    Table_info_ID INT,
    PK_Column_Name VARCHAR(99)
)
INSERT INTO @Table_PK_Info VALUES(2,1,'order_id'),(8,7,'transition_record_id'),(3,1,'order_id2')
DECLARE @SQL NVarchar(MAX),
@ID INT,
@Table_Name VARCHAR(20),
@whereCondition VARCHAR(99)
DECLARE cur_Tabel_Info CURSOR
FOR SELECT Table_info_ID,Table_Name FROM @Table_Info
OPEN cur_Tabel_Info
FETCH NEXT FROM cur_Tabel_Info
INTO @ID, @Table_Name
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @whereCondition =ISNULL(@whereCondition+' AND ','') +'t1.'+PK_Column_Name+'='+'t2.'+PK_Column_Name FROM @Table_PK_Info WHERE Table_info_ID=@ID
SET @SQL = 'INSERT INTO hub.dbo.'+@Table_Name+'
SELECT * FROM del.dbo.'+@Table_Name+' AS T1
WHERE NOT EXISTS (
SELECT *
FROM hub.dbo.'+@Table_Name+' AS T2
WHERE '+@whereCondition+')'
SELECT @SQL
--EXEC(@SQL)
SET @whereCondition = NULL
FETCH NEXT FROM cur_Tabel_Info
INTO @ID, @Table_Name
END
If you had already noticed and fixed the flaw, your answer sharing is always welcome.
If you have any question, feel free to let me know.
Eric Zhang
TechNet Community Support -
column with unique constraint + not null constraint = primary key! (to some extent) Is it correct?
I invite your ideas.
http://www.techonthenet.com/oracle/unique.php
http://www.allapplabs.com/interview_questions/db_interview_questions.htm#q13
Difference between Unique key and Primary key(other than normal difference) -
Table and Primary Key Case Sensitive in Automated Row Fetch
Why are the table and primary key fields case sensitive in the Automated Row Fetch process? I'm debugging an error in this process and I'm not sure what the case should be.
Russ - It's a defect of sorts. Use upper case in the process definition. I don't think these processes will work with tables or columns whose names are not known to the database as upper-case values, i.e., defined using double-quoted non-upper case.
Scott -
Logical standby and Primary keys
Hi All,
Why are primary keys essential for creating a logical standby database? I have created a logical standby database on a testing basis without primary keys on most of the tables and it's working fine. I have not even put my main DB in force logging mode. This is because redo log files or standby redo logfiles are transformed into a set of SQL statements to update the logical standby.
Have you done any DML operations with NOLOGGING options, and do you notice any errors in the alert.log? I am just curious to know.
But note what the documentation says: in the absence of both a primary key and a non-null unique constraint/index, all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. In other words, all columns except those with the following types are logged: LONG, LOB, LONG RAW, object type, and collections.
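In practice, preparing the primary for a logical standby is usually sketched like this (generic statements, not taken from the thread):

```sql
-- Guard against NOLOGGING operations that would generate no redo
ALTER DATABASE FORCE LOGGING;
-- Log enough column data with each redo record to identify rows on the standby
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
```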
Jaffar -
What is the difference between ROWID and primary key?
dear all
my question is about the materialized view creation parameters (WITH ROWID and
WITH PRIMARY KEY)
my master table contains a primary key
and i created my materialized view as follow:
CREATE MATERIALIZED VIEW LV_BULLETIN_MV
TABLESPACE USERS
NOCACHE
LOGGING
NOCOMPRESS
NOPARALLEL
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
AS
SELECT
BCODE ID, BTYPE BTYPE_ID,
BDATE THE_DATE,SYMBOL_CODE STOCK_CODE,
BHEAD DESC_E, BHEADARB DESC_A,
BMSG TEXT_E, BMSGARB TEXT_A,
BURL URL, BTIME THE_TIME
FROM BULLETIN@egid_sefit;
I need to know: is there a difference between using WITH ROWID and WITH PRIMARY KEY in terms of the performance of the query?
Hi again,
Fast-refreshing complex views based on rowids is, as the previous discussion (and your example) shows, not possible.
Complex remote (replication) snapshots cannot be based on ROWID either.
for 10.1
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10759/statements_6002.htm#sthref5054
for 10.2
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6002.htm#sthref6873
So I guess (didn't check it) that this applies ONLY to replication snapshots.
This is not documented clearly though (documentation bug ?!)
Documentation states that the following is generally not possible with Rowid MVIEWS:
Distinct or aggregate functions
GROUP BY or CONNECT BY clauses
Subqueries
Joins
Set operations
Rowid materialized views are not eligible for fast refresh after a master table reorganization until a complete refresh has been performed.
The main purpose of my statements was to give a few tips on how to avoid common problems with this complex subject. For example: being able to CREATE an MVIEW with a fast refresh clause does not really guarantee that it will refresh fast in the long run (reorganisation, partition changes) if it is ROWID based. Further, ROWID mviews have limitations according to the documentation (no GROUP BY, no CONNECT BY; links see above). Plus, fast refresh can only use filter columns of the mview logs, and for aggregates you need additional COUNT(*) pseudo columns.
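The mview-log point above can be sketched as follows (only the master table name is taken from the thread; the log options shown are illustrative):

```sql
-- On the master site: a fast-refreshable mview needs a matching materialized view log.
-- WITH PRIMARY KEY mviews need a primary-key log; WITH ROWID mviews need a rowid log.
CREATE MATERIALIZED VIEW LOG ON bulletin WITH PRIMARY KEY;
-- CREATE MATERIALIZED VIEW LOG ON bulletin WITH ROWID;  -- the rowid alternative
```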
kind regards
Karsten -
Problem with type conversion and primary key during row fetch
[Note, this error occurs in Oracle XE, APEX 2.1.0.00.39]
I have been having a problem with Automatic Row Fetch:
ORA-01460: unimplemented or unreasonable conversion requested
So in an effort to resolve this, I created a PL/SQL process with the query below and was able to isolate the same error. The problem seems to come from one of the primary keys being a "number" type (APP_ID). The error occurs on the line:
where u.app_id=:P5_APP_ID
I have tried the following variations on this line in an effort to resolve this, but all generate the same error:
1) where to_char(u.app_id) = :P5_APP_ID
2) where u.app_id = to_number(:P5_APP_ID)
3) where to_char(u.app_id) = to_char(:P5_APP_ID)
I've also tried the alternate syntax "&__." and "#__#", but these don't function in the Source field and show up as syntax errors.
Any suggestions are welcome.
begin
for r in (
select app_name, apptype, appcreator, appurl
from application_users u, application_info i
where u.app_id=:P5_APP_ID
and i.app_id=u.app_id
and u.username=:P5_USERNAME)
loop
begin
:P5_APP_NAME := r.app_name;
:P5_APPURL := r.appurl;
exception
when others then
raise_application_error(-20000,'In Loop Failure',true);
end;
end loop;
exception
when others then
raise_application_error(-20000,'Out of Loop Failure',true);
end;
Thanks in advance,
Barney
I found a prior post referencing a similar issue; it was solved by using the "v(__)" syntax. This did resolve my issue. However, I have a quick follow-on which I'm hoping someone can answer quickly...
Since the "v(__)" syntax won't work for the Automatic Row Fetch (at least to the best of my knowledge), I have to use a manual process. However, the manual query as shown above doesn't actually populate any of the form values through the bind variables; they all remain at their cached values from prior data entry on the form.
Is using the bind variables for assignment incorrect, or is there something that must be done to get the updates to show up in the form (ordering of processes, etc.).
My manual process is running in the Load: Before Header state so I would have expected it to update all of the fields.
Thanks in advance,
Barney -
Null in Composite Primary Key and "Primary keys must not contain null"
Hello all.
I'm a newbie when it comes to JPA/EJB3, but I was wondering whether TopLink supports composite primary keys with NULL in some field (something I thought was perfectly fine in any RDBMS).
I used JDeveloper (I'm using Oracle 10g database and JDeveloper 10.1.3.2.) wizards to generate JPA classes and I checked out generated files (with annotations), so they should be right (by the way, other O-R mappings for my model are working right, but this one).
I'm getting the next error:
Exception Description: The primary key read from the row [DatabaseRecord(
TSUBGRUPOSLDI.CD_GRUP => 01
TSUBGRUPOSLDI.CD_SUBGRUP => null
TSUBGRUPOSLDI.CG_POBL => 058
TSUBGRUPOSLDI.CG_PROV => 28
TSUBGRUPOSLDI.DSCR => Sanidad)] during the execution of the query was detected to be null. Primary keys must not contain null.
Compound primary key is (CD_GRUP, CD_SUBGRUP). No foreign keys, no joins (only a NamedQuery: "select o from ..."). It's the simplest case!
I checked out that everything runs ok if there's no "null" value in CD_SUBGRUP.
After some research (this and other forums) I'm beginning to believe that it's not supported, but not sure.
Am I doing something wrong? If not, what is the reason for not supporting this? Will it be supported in the future?
Thanks in advance.
Null is a special value: in many databases it is not comparable to another null value (hence the IS NULL operator), and it may pose problems when used to uniquely identify an entity. TopLink does not support null values within a composite PK. As the nullable column is most likely not designated as part of the PK within your database table (many databases do not allow this), I recommend updating the entity PKs to match those of the database.
--Gordon -
Difference between surrogate keys and primary keys
Can any one of you please explain the difference between a primary key and a surrogate key, with a simple example?
Hi rajesh,
surrogate key: for every master data record, the system creates an SID (surrogate ID). If that particular master data record is used in an InfoCube, the system stores the SID corresponding to that master data in the dimension table, and the dimension ID of the dimension table is stored in the InfoCube fact ('F') table.
Primary key: while creating a table we select a unique field and make it the primary key. For example, empid acts as a unique key (primary key) for the emp table. Through this primary key we can identify rows and get data from the table.
I hope this helps you.
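Outside the BW-specific SID mechanics, the same idea can be sketched in plain SQL (illustrative names; the identity syntax assumes Oracle 12c or later):

```sql
-- Natural (business) primary key: the key has meaning outside the database
CREATE TABLE emp_natural (
  empid NUMBER PRIMARY KEY,
  name  VARCHAR2(50)
);
-- Surrogate key: a system-generated, meaningless identifier is the primary key,
-- while the business key is kept as a separate unique column
CREATE TABLE emp_surrogate (
  sid   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  empid NUMBER UNIQUE,
  name  VARCHAR2(50)
);
```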
regards
anil -
Problems with database modelling and primary keys
Hi,
I use JDeveloper exclusively for data modelling and generating the sql to build my db.
It's good, but I have found a bug that can be rather annoying:
If I change the primary key of a table, the change does not seem to register within the model. So when I try to create a foreign-key relationship to another table, the type is the old type (say I went from VARCHAR2 to NUMBER).
The only way around this is to delete the table and create it again. This also causes problems, because the DB generation wizard will then refuse to open properly (it does not know that a table has been deleted).
Is this a known issue?
Cheers
Rakesh
Is this forum meant for reporting bugs or not? The issue I have stated is repeatable and in the current version.
R -
Display the foreign keys and primary keys in a hierarchy query, please?
Is there any SQL query to get the following columns?
I should display all the master table names, their primary key fields, their foreign key fields, and the names of the foreign-key tables.
This should be displayed as a hierarchy, i.e. it should start from the top-level table, like an upside-down tree.
can we query the database like this
thanks in advance
prasanth a.s.
Hi there,
Please can anybody help with this query!
Thanks in advance
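One way such a query could be sketched (untested; it walks the user_constraints dictionary view, starting from tables that are referenced but reference nothing themselves):

```sql
-- Build parent -> child pairs from FK constraints, then walk them top-down
WITH fk AS (
  SELECT p.table_name      AS parent_table,
         f.table_name      AS child_table,
         f.constraint_name AS fk_name
  FROM   user_constraints f
         JOIN user_constraints p
           ON p.constraint_name = f.r_constraint_name
  WHERE  f.constraint_type = 'R'
)
SELECT LPAD(' ', 2 * (LEVEL - 1)) || child_table AS table_tree,
       parent_table, fk_name
FROM   fk
START WITH parent_table NOT IN (SELECT child_table FROM fk)
CONNECT BY NOCYCLE PRIOR child_table = parent_table;
```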
prasanth a.s. -
Best practices with sequences and primary keys
We have a table of system logs that has a column called created_date. We also have a UI that displays these logs ordered by created_date. Sometimes two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order; although in this particular application's case, since it is not a multi-threaded environment, it will work, so we are proceeding with it.
The value for created_date is NOT set at the database level (as SYSDATE) but rather by the application when it creates the object, which is persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which creates its own object and persists it with key N, after which control returns to thread A, which persists its object with key N+1. In this scenario thread A has an earlier timestamp but a larger key, so the key ordering disagrees with the timestamp ordering.
I like to think of primary keys as solely something to be used for referential purposes at the database level, rather than inferring application-level meaning (like "the larger the key, the more recent the record", etc.). What do you guys think? Am I being too rigorous in my views here? Or perhaps I am even mistaken in how I interpret this?
> I think the chronological order of records should be determined using a timestamp (i.e. "order by created_date desc" etc.)
Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
(which now has a value LATER than the value used by session 1) in the INSERT statement.
3. at time t5 session 2 COMMITs.
4. at time t7 session 1 COMMITs.
Tell us: which row was added FIRST?
If you extract data at time t4 you won't see ANY of those rows above since none were committed.
If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
For example if you extract data at 2:01pm for the period 1pm thru 1:59pm and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm your extract will NOT include that data.
Even worse - your next extract wll pull data for 2pm thru 2:59pm and that extract will NOT include that data either since the SYSDATE value in the rows are 1:55pm.
The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed but the only values that can be queried are the ones that exist AFTER the row is committed.
About the best you, the user (i.e. not ORACLE the superuser), can do is to
1. create the table with ROWDEPENDENCIES
2. force delayed-block cleanout prior to selecting data
3. use ORA_ROWSCN to determine the order that rows were inserted or modified
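A sketch of steps 1 and 3 (illustrative table name; ORA_ROWSCN approximates commit order, which is exactly what an application-supplied timestamp cannot give you):

```sql
-- ROWDEPENDENCIES stores an SCN per row rather than per block
CREATE TABLE system_logs (
  log_id       NUMBER PRIMARY KEY,
  created_date TIMESTAMP,
  msg          VARCHAR2(200)
) ROWDEPENDENCIES;

-- Order by the row-level commit SCN instead of the application timestamp
SELECT log_id, msg, ORA_ROWSCN
FROM   system_logs
ORDER  BY ORA_ROWSCN;
```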
As luck would have it there is a thread discussing just that in the Database - General forum here:
ORA_ROWSCN keeps increasing without any DML -
Skip_unusable_indexes and primary keys
Hi,
I have data for a big table split into multiple files which I want to load via sqlldr, all into the same table, as fast as possible. The indexes should be rebuilt after the last file is loaded.
So I use SKIP_UNUSABLE_INDEXES=TRUE, SKIP_INDEX_MAINTENANCE=TRUE.
I thought sqlldr would proceed if an index is in an unusable state. But if the index belongs to a primary key, it stops anyway with:
SQL*Loader-951: Error calling once/load initialization
ORA-02373: Error parsing insert statement for table ....
ORA-01502: index 'MX_MARKET_DATA.MDB_PK' or partition of such index is in unusable state
In my workaround I rebuild the PK after every processed file. I could also merge the files into one single file in advance, but I would rather try to load some files in parallel.
Is there a way to load and skip the pk?
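One workaround sometimes used here (a sketch only; the constraint name is taken from the error message, the table name is an assumption) is to disable the PK constraint for the duration of the parallel loads and re-enable it once at the end:

```sql
-- Before the parallel sqlldr runs: disabling the PK drops its unique index
ALTER TABLE mx_market_data.market_data DISABLE CONSTRAINT mdb_pk;
-- ... load all files ...
-- Afterwards: re-enabling validates uniqueness and rebuilds the index in one pass
ALTER TABLE mx_market_data.market_data ENABLE CONSTRAINT mdb_pk;
```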
BR
Tobias
Hi,
>ORA-01502: index 'MX_MARKET_DATA.MDB_PK' or partition of such index is in unusable state
Please check the following link:
ORA-01502: index or partition of such index is in unusable state tips -
Recreate indexes, triggers and primary keys.
Hi All,
We have done an import of a few tables from MS SQL Server to Oracle, but we can't see the indexes, triggers, primary keys or foreign keys of those tables.
Could you please let us know how to recreate the primary keys, foreign keys, indexes, triggers etc. of those tables?
Pl suggest.
Thanks in advance!
Regards,
Vidhya
>
We have done an import of few table from MS SQL Server to oracle
>
What does that mean exactly? How did you do an import from sql server to Oracle?
Post your 4 digit Oracle version (result of SELECT * FROM V$VERSION) and the exact steps you took to 'import'.
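As a first step, it may help to verify what actually arrived on the Oracle side (a sketch; run as the owning schema and substitute your own table name):

```sql
-- Constraints (P = primary key, R = foreign key), indexes and triggers
SELECT constraint_name, constraint_type
FROM   user_constraints WHERE table_name = 'YOUR_TABLE';
SELECT index_name   FROM user_indexes  WHERE table_name = 'YOUR_TABLE';
SELECT trigger_name FROM user_triggers WHERE table_name = 'YOUR_TABLE';
```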