SQL: Find table with max no. of rows
I have a table containing a list of table names for each owner, as follows:
## Table: db_tables
OWNER  TABLE_NAME
a      ta_1
a      ta_2
a      ta_3
b      tb_1
b      tb_2
c      tc_1
Now I want to find, for each owner, the table with the maximum number of rows.
Can anyone give me a solution for the above?
Assuming 10g and above:
SQL> SELECT owner,
            MAX(table_name) KEEP (DENSE_RANK FIRST
                ORDER BY XMLQUERY (t RETURNING CONTENT).getnumberval() DESC) table_name,
            MAX(XMLQUERY (t RETURNING CONTENT).getnumberval()) cnt
     FROM  (SELECT owner, table_name,
                   'count(ora:view("' || table_name || '"))' t
            FROM   all_tables
            WHERE  owner IN ('MICHAEL','SCOTT'))
     GROUP BY owner;
OWNER    TABLE_NAME    CNT
MICHAEL  SERVICE_ZIP   1000000
SCOTT    EMP           14

2 rows selected.
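Note that recent releases may reject a column-valued XQuery expression like the one above with ORA-19102 (the XQuery string is normally expected to be a literal). A DBMS_XMLGEN-based variant is an alternative; this is an untested sketch that assumes the invoker can select from every listed table, and EXTRACTVALUE is deprecated in newer releases:

```sql
-- Count rows per table via DBMS_XMLGEN, then keep the largest table per owner.
SELECT owner,
       MAX(table_name) KEEP (DENSE_RANK LAST ORDER BY cnt) table_name,
       MAX(cnt) cnt
FROM  (SELECT owner, table_name,
              TO_NUMBER(
                EXTRACTVALUE(
                  XMLTYPE(DBMS_XMLGEN.GETXML(
                    'select count(*) c from "' || owner || '"."' || table_name || '"')),
                  '/ROWSET/ROW/C')) cnt
       FROM   all_tables
       WHERE  owner IN ('MICHAEL','SCOTT'))
GROUP BY owner;
```

If approximate figures are acceptable, ALL_TABLES.NUM_ROWS (populated by statistics gathering) avoids scanning the tables entirely.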
Similar Messages
-
How can I implement the equivalent of a temporary table with "on commit delete rows"?
hi,
I have triggers on several tables. During a transaction, I need to gather information from all of them, and once one of the triggers has all the information, it creates some data. I can't rely on the order of the triggers.
In Oracle and DB2, I'm using temporary tables with "ON COMMIT DELETE ROWS" to gather the information - They fit perfectly to the situation since I don't want any information to be passed between different transactions.
In SQL Server, there are local temporary tables and global. Local temp tables don't work for me since apparently they get deleted at the end of the trigger. Global tables keep the data between transactions.
I could use global tables and add a field that identifies the transaction, and join on this field in each access to these tables, but I didn't find how to get a unique identifier for the transaction. @@SPID is the session, and sys.dm_tran_current_transaction is not accessible by the user I'm supposed to work with.
Also, with global tables I can't just wipe the data when the "operation is done", since at the trigger level I cannot identify when the operation was done, the transaction was committed, and no other triggers are expected to fire.
Any idea which construct I could use to achieve the above, i.e. passing information between different triggers in the same transaction, while keeping the data visible only to the current transaction?
(I saw similar questions but didn't see an adequate answer, sorry if posting something that was already asked).
Thanks!

This is the scenario: if changes (CRUD) happen to both TableA and TableB, then log some info to TableC. The logic looks something like this:
Create Trigger TableA_C After Insert on TableA {
    If info in temp tables available from TableB
        Write info to TableC
    else
        Write to temp tables info from TableA
}
Create Trigger TableB_C After Insert on TableB {
    If info in temp tables available from TableA
        Write info to TableC
    else
        Write to temp tables info from TableB
}
So each trigger needs info from the other table, and once everything is available, info to TableC is written. Info is only from the current transaction.
The order of the triggers is not defined. Also, there's no guarantee that both triggers will fire; changes can happen to only TableA / B, and in that case I don't want to write anything to TableC.
The part that gets and sets info to temp table is implemented as temp tables with "on commit delete rows" in DB2 / Oracle.
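One construct that might work where the DMV is inaccessible is a shared table keyed by the transaction id. This is only a sketch: it assumes SQL Server 2016+, where the CURRENT_TRANSACTION_ID() built-in is available, and a hypothetical staging table:

```sql
-- Hypothetical shared staging table; rows are tagged with the transaction id.
CREATE TABLE dbo.TriggerScratch (
    txn_id  BIGINT        NOT NULL,  -- CURRENT_TRANSACTION_ID(), SQL Server 2016+
    src     CHAR(1)       NOT NULL,  -- 'A' or 'B': which trigger wrote the row
    payload NVARCHAR(100) NULL
);

-- Inside the TableA trigger:
INSERT INTO dbo.TriggerScratch (txn_id, src, payload)
SELECT CURRENT_TRANSACTION_ID(), 'A', SomeColumn
FROM   inserted;

IF EXISTS (SELECT 1 FROM dbo.TriggerScratch
           WHERE txn_id = CURRENT_TRANSACTION_ID() AND src = 'B')
BEGIN
    -- Both sides are present: write TableC, then clear this transaction's rows.
    DELETE FROM dbo.TriggerScratch
    WHERE  txn_id = CURRENT_TRANSACTION_ID();
END;
```

Leftover rows from transactions that touched only one table would still need periodic cleanup, which is exactly the weakness of not having ON COMMIT DELETE ROWS.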
What do you think? As I've mentioned, I could use global temp tables with a field that identifies the transaction, but didn't find something like that in SQL Server. And the lifespan of local temp tables is too short. -
Find - tables with largest number of records?
Hi,
I need to find the tables with the largest number of records. Is there a transaction that shows these details?
aRs

Go to transaction DB02, then click the button that reads "Space Statistics"; in the dialog box, click OK, leaving the "*" for all tables. On the next screen, put your cursor in the column labeled Rows and click the sort button. Now you will see your biggest tables at the top of the list.
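At the database level (assuming an Oracle backend and SELECT access to the DBA views), the same ranking can be read from the optimizer statistics; NUM_ROWS is only as fresh as the last statistics gathering run:

```sql
-- Largest tables by the optimizer's stored row counts (approximate).
SELECT owner, table_name, num_rows
FROM   dba_tables
WHERE  num_rows IS NOT NULL
ORDER  BY num_rows DESC;
```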
Regards,
Rich Heilman -
Migrating CMP EJB mapped to SQL Server table with identity column from WL
Hi,
I want to migrate an application from Weblogic to SAP Netweaver 2004s (7.0). We had successfully migrated this application to an earlier version of Netweaver. I have a number of CMP EJBs which are mapped to SQL Server tables with the PK as identity columns(autogenerated by SQL Server). I am having difficulty mapping the same in persistant.xml. This scenario works perfectly well in Weblogic and worked by using ejb-pk in an earlier version of Netweaver.
Please let me know how to proceed.
-Sekhar -
EXP 11.2.0.1 exports only tables with at least 1 row
I have Oracle DB 11.2.0.1 and use the export utility to export tables and row data from the database to a dmp file.
I have noticed that it now extracts only tables with at least 1 row and does not export empty tables,
so when I execute an import I lose a lot of tables...
Very dangerous...
Is there an explanation?

You may get this effect in release 11.2.0.1 if the table has no segment. You probably have deferred_segment_creation=true. This query may help identify the tables without segments:
select table_name from user_tables
minus
select segment_name from user_segments where segment_type='TABLE';
(you will of course have to modify the query to handle more complex table structures).
Data Pump does not have this problem.
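If you must stay with classic exp, a possible workaround (a sketch only; test it first, and the table name is hypothetical) is to force segment creation for each empty table, or to stop deferring segment creation for tables created from then on:

```sql
-- Force a segment so classic exp sees the table (run per empty table):
ALTER TABLE some_empty_table ALLOCATE EXTENT;

-- Or disable the feature for future tables:
ALTER SYSTEM SET deferred_segment_creation = FALSE;
```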
John Watson
Oracle Certified Master DBA -
Moving Access table with an autonumber key to SQL Server table with an identity key
I have an SSIS package that is moving data from an Access 2010 database to a SQL Server 2008 R2 database. Two of the tables that I am migrating have identity keys in the SQL Server tables and I need to be able to move the autonumber keys to the SQL
Server tables. I am executing a SQL Script to set the IDENTITY_INSERT ON before I execute the Data Flow task moving the data and then execute a SQL Script to set the IDENTITY_INSERT OFF after executing the Data Flow task.
It is failing with an error that says:
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was
done.".
Error: 0xC020901C at PGAccountContractDetail, PGAccountContractDetail [208]: There was an error with input column "ID" (246) on input "OLE DB Destination Input" (221). The column status returned was: "User does not have permission to
write to this column.".
Error: 0xC0209029 at PGAccountContractDetail, PGAccountContractDetail [208]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "input "OLE DB Destination Input" (221)" failed because error code 0xC020907C occurred, and the
error row disposition on "input "OLE DB Destination Input" (221)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information
about the failure.
Error: 0xC0047022 at PGAccountContractDetail, SSIS.Pipeline: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "PGAccountContractDetail" (208) failed with error code 0xC0209029 while processing input "OLE DB
Destination Input" (221). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted
before this with more information about the failure.
Any ideas on what is causing this error? I am thinking it is the identity key in SQL Server that is not allowing the insert, but I do not understand why, given that I set IDENTITY_INSERT ON.
Thanks in advance for any help/guidance provided.

I suspect it is the security, as specified in the message, e.g. your DBA set the ID columns so that no user can override values in them.
And I suggest you first put the data into a staging table, then push it to the destination; this does not resolve the issue, but ensures better processing.
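Another thing worth checking (an assumption on my part, not something the error text proves): SET IDENTITY_INSERT is scoped to the session, so a SET issued from a separate Execute SQL Task connection does not affect the Data Flow's own connection unless the connection manager uses RetainSameConnection; the OLE DB Destination's "Keep identity" fast-load option is often the simpler route. The session scoping looks like this (hypothetical dbo.Demo table):

```sql
CREATE TABLE dbo.Demo (id INT IDENTITY(1,1) PRIMARY KEY, name NVARCHAR(50));

-- Works: SET and INSERT issued on the same connection.
SET IDENTITY_INSERT dbo.Demo ON;
INSERT INTO dbo.Demo (id, name) VALUES (42, N'explicit key');
SET IDENTITY_INSERT dbo.Demo OFF;

-- An INSERT with an explicit id on any OTHER connection fails, because
-- that connection never issued the SET.
```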
Arthur
MyBlog
Twitter -
How to find tables with columns that are not supported by Data Pump network_link
Hi Experts,
We are trying to import a database into a new DB via Data Pump network_link.
As the Oracle documentation states, tables with columns that are object types are not supported in a network export. An ORA-22804 error will be generated and the export will move on to the next table. To work around this restriction, you can manually create the dependent object types within the database from which the export is being run.
My question: how do we find these tables with columns that are object types and thus not supported in a network export?
We have LOB objects and the Oracle Spatial SDO_GEOMETRY object type. Our database size is about 300G; normally exp takes 30 hours.
We are trying to use Data Pump with network_link to speed up the export process.
How do we fix the Oracle Spatial SDO_GEOMETRY user type issue during Data Pump?
Our system is 32-bit Windows 2003 with a 10gR2 database.
Thanks
Jim
Edited by: user589812 on Nov 3, 2009 12:59 PM

Hi,
I remember there being issues with sdo_geometry and DataPump. You may want to contact oracle support with this issue.
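As for the "how to find these tables" part, a dictionary query along these lines (an untested sketch; adjust the owner filter to your schemas) should flag columns whose declared type is a user-defined object type:

```sql
-- Tables having columns whose data type is a user-defined object type.
SELECT DISTINCT c.owner, c.table_name, c.column_name, c.data_type
FROM   dba_tab_columns c
JOIN   dba_types t
       ON  t.owner     = c.data_type_owner
       AND t.type_name = c.data_type
WHERE  c.owner NOT IN ('SYS','SYSTEM','MDSYS');  -- trim system schemas as needed
```

SDO_GEOMETRY columns show up here too, since the type is owned by MDSYS while the table belongs to a user schema.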
Dean -
Tables with buttons to add rows
Morning
I am having trouble with a table that has buttons to add, show and hide rows. There are 2 rows in my table which I need to be able to add depending on which button is clicked. I've managed to get the first button to add an instance using (Table.Row1.instanceManager.addInstance(1);) but I can't get row 2 to add to the bottom of the table with a button. I've tried to amend the script to fit the second row but it doesn't work.
Table.Row2.instanceManager.addInstance(2);
I'd appreciate some help
I can send a sample if need be.
Many thanks
Ben

The correct syntax is addInstance(1) (or addInstance(true)).
As long as the row is set to repeat (Object > Binding > Repeat Row for Each Data Item) it should work. If the row doesn't exist yet, then try using the underscore shortcut for the Instance Manager: Table._Row2.addInstance(1); -
Table with a sub-heading row - under only two columns
Hi all,
Using TCS2 on Win 7 64-bit. This is likely a silly question but I can't for the life of me figure it out.
I want to create a table as follows:
-one table, with 4 columns and 5 rows
-I would like the header row to span all 4 columns but be divided into only 3 pieces, so that in the first row beneath the header I can further sub-divide the two right-most columns
I've attached a little screen diagram to try and give a sense of what I'm looking to do.
I'm sure this is a simple thing but I just can't figure it out!
Any help is greatly appreciated.
Thanks,
Adriana
Select the two right-most cells in the top header row and choose Table > Straddle.
HTH
Regards,
Peter
Peter Gold
KnowHow ProServices -
How to Capture a Table with large number of Rows in Web UI Test?
HI,
Is there any possibility to capture a DOM table with a large number of rows (say more than 100) in a Web UI Test?
Or is there a bug?

Hi,
You can try following code to capture the table values.
To store the table values in a CSV file:
web.table( xpath_of_table ).exportToCSVFile("D:\exporttable.csv", true);
To store the table values in a string:
String tblValues = web.table( xpath_of_table ).exportToCSVString();
info(tblValues);
Thanks
-POPS -
How to export a table with half a million rows?
I need to export a table that has 535,000 rows. I tried to export to Excel, and it exported only 65,535 rows. I tried to export to a text file, and it said it was using the clipboard (?) and that 65,000 rows was the maximum. Surely there has to be a way to export the entire table. I've been able to import much bigger CSV files than this, millions of rows.

What version of Access are you using? Are you attempting to copy and paste records, or are you using Access's export functionality from the menu/ribbon? I'm using Access 2010 and just exported a million-record table to both a text file and to Excel (.xlsx format). Excel 2003 (using the .xls 97-2003 format) does have a limit of 65,536 rows, but the later .xlsx format does not.
-Bruce -
Migrating from SQL Server tables with column names starting with a digit
hi,
I'm trying to migrate a database from SQL Server, but there are a lot of tables with column names starting with a digit, e.g. 8420_SubsStatusPolicy.
I want to perform an offline migration, so that when I create the scripts these columns are created with the same names.
Can we create rules so that when a column like this is migrated, a character is appended in front of it?
When I use the Copy to Oracle option, it renames the column by default to A8420_SubsStatusPolicy.
Edited by: user8999602 on Apr 20, 2012 1:05 PM

Hi,
Oracle doesn't allow unquoted object names to start with a digit. I'll check what happens during a migration regarding changing names, as I haven't come across this before.
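For completeness: Oracle does accept an identifier that starts with a digit if it is double-quoted, but then every reference must quote it exactly, which is why migration tools usually prefix a letter instead. A quick sketch:

```sql
-- Legal only as a quoted identifier; case- and quote-sensitive from then on.
CREATE TABLE "8420_SubsStatusPolicy" (id NUMBER);
SELECT id FROM "8420_SubsStatusPolicy";

-- The unquoted form raises an invalid-name error:
-- CREATE TABLE 8420_SubsStatusPolicy (id NUMBER);
```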
Regards,
Mike -
Select query on a table with 13 million rows
Hi guys,
I have been trying to perform a select query on a table which has 13 million entries; however, it took around 58 minutes to complete.
The table has 8 columns, 4 of which form the composite primary key, as below:
(PK) SegmentID > INT
(PK) IPAddress > VARCHAR(45)
MAC Address > VARCHAR(45)
(PK) Application Name > VARCHAR(45)
Total Bytes > INT
Dates > VARCHAR(45)
Times > VARCHAR(45)
(PK) DateTime > DATETIME
The SQL query is:
select ipaddress, macaddress, sum(totalbytes), applicationname, dates, times
from appstat
where segmentid = 1
  and datetime between '2011-01-03 15:00:00.0' and '2011-01-04 15:00:00.0'
group by ipaddress, applicationname
order by applicationname, sum(totalbytes) desc
Is there a way I can make this query faster (through my.cnf or any other method)?
Any feedback is welcomed.
Thank you.
Mus

Tolls wrote:
What db is this? You never said.
Anyway, it looks like it's using the primary key to find the correct rows.
Is that the correct number of rows returned? 5 million? Sorted?

I am using MySQL. By the way, the query has become much faster (22 sec) after I changed the configuration file (based on my-huge.cnf).
The number of rows returned is 7999 Rows
This is some portion of the my.cnf
# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 800M
max_allowed_packet = 1M
table_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
log = /var/log/mysql.log
log-slow-queries = /var/log/mysqld.slow.log
long_query_time=10
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 6
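One more thing I am considering is the indexing (an assumption, since no index list was posted): with a composite primary key of (SegmentID, IPAddress, ApplicationName, DateTime), the range predicate on datetime sits behind two other key parts, so the primary key cannot seek on the date range once segmentid is fixed. A secondary index that puts datetime right after segmentid may help (hypothetical index name):

```sql
-- Matches the equality filter on segmentid plus the range on datetime.
ALTER TABLE appstat ADD INDEX idx_segment_datetime (segmentid, datetime);
```

EXPLAIN on the SELECT would confirm whether the optimizer actually uses it.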
Is there anything else I need to tune so it can be faster?
Thanks a bunch.
Edited by: user578505 on Jan 17, 2011 6:47 PM -
Unable To Select From SQL Server table with more than 42 columns
I have set up a link between a Microsoft SQL Server 2003 database and an Oracle 9i database using Heterogeneous Services (HSODBC). It's working well with most of the schema I'm selecting from, except for 3 tables, and I don't know why. The common denominator among these tables is that they all have at least 42 columns: two have 42 columns, one has 56, and the other 66. Two of the tables are empty, one has almost 100k records, and one has 170k records, so I don't think the size of the table matters.
Is there a limitation on the number of table columns you can select from through a dblink? Even the following statement errors out:
select 1
from "Table_With_42_Cols"@sqlserver_db
The error message I get is:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message [Generic Connectivity Using ODBC]
ORA-02063: preceding 2 lines from sqlserver_db
Any assistance would be greatly appreciated. Thanks!

Not a very efficient or space-friendly design, doing name-value pairs like that.
Other methods to consider is splitting those 1500 parameters up into groupings of similar parameters, and then have a table per group.
Another option would be to use "vertical table partitioning" (as opposed to the more standard horizontal partitioning provided by the Oracle Partitioning option); this can be achieved (kind of) in Oracle using clusters.
Sooner or later this name-value design is going to bite you hard. It has 1500 rows where there should be only 1 row. It is not scalable, and as you're discovering, it is unnatural to use. I would rather change that table and design sooner rather than later. -
Uix table with multiple lines per row
Hi,
How can I design a table that has multiple lines for one row?
It should look like this
row1-cell1 row1-cell2
row1-cell3 row1-cell4
row2-cell1 row2-cell2
row2-cell3 row2-cell4
Thanks a lot.
Christian

Christian,
I'll scribble down a quick example off the top of my head...
<table data:tableData="${mydata}">
  <column>
    <stackLayout>
      <link text="cell1"/>
      <link text="cell3"/>
    </stackLayout>
  </column>
  <column>
    <stackLayout>
      <link text="cell2"/>
      <link text="cell4"/>
    </stackLayout>
  </column>
</table>
I recommend you just play around with UIX XML, and you'll find something that works.
Hope this helps,
Ryan