Picking up only committed rows
Hi, we run SQL Server 2012 Standard. I want to write a proc that, when it starts running, picks up only rows from a table that were committed prior to the start of execution. Generally, the procs that insert into this table wrap 1-n inserts inside a transaction,
so consumers like my proc will pick up all or none of their "unit of work" inserts.
I believe my proc's query can be set-based, in one SQL statement. I think that already guarantees the goal of picking up only prior commits, but I am not sure. I would also like to take whatever steps are necessary to minimize the
chance of my proc blocking others and vice versa. Any input would be appreciated.
Assuming the default READ_COMMITTED isolation level behavior, a SELECT statement will return only rows from committed transactions. However, the statement could return rows committed after the statement started.
You can turn on the READ_COMMITTED_SNAPSHOT database option so that row versioning instead of locking is used to implement READ_COMMITTED isolation level. Only rows committed at the time the SELECT statement started will then be visible to the query.
Be aware that the READ_COMMITTED_SNAPSHOT database option adds 14 bytes of row-versioning overhead per row and uses tempdb more heavily for the row version store.
Also consider that applications relying on locking/blocking behavior may be impacted with READ_COMMITTED_SNAPSHOT on. If that is the case, leave the database option off and use the SNAPSHOT isolation level for your query instead.
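A sketch of both options in T-SQL (the database name YourDb and the procedure name are placeholders, and you should test the impact on existing workloads before enabling either option in production):

```sql
-- Option 1: make READ COMMITTED use row versioning database-wide.
-- Needs near-exclusive access to the database while switching.
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Option 2: leave READ_COMMITTED_SNAPSHOT off and opt in per query
-- with SNAPSHOT isolation instead.
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
CREATE PROCEDURE dbo.usp_PickUpCommittedRows  -- hypothetical name
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRAN;
    -- This set-based SELECT sees only rows committed before the
    -- transaction started; writers are neither blocked nor blocking.
    SELECT * FROM dbo.WorkTable;  -- hypothetical table name
    COMMIT;
END
```

With Option 1 no code change is needed at all; with Option 2 only this proc opts in, so applications that depend on blocking behavior elsewhere are untouched.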
Dan Guzman, SQL Server MVP, http://www.dbdelta.com
Similar Messages
-
How can I modify this script to return only certain rows of my mySQL table?
Hi there,
I have a php script that accesses a mySQL database; it was generated automatically by the Flex Builder wizard. The script works great and there are no problems with it. It allows me to perform CRUD on a table by calling it from my Flex app, and it retrieves all the data and puts it into a nice MXML format.
My question: currently, when I call "findAll" to retrieve the data, it retrieves ALL the rows in the table. That was fine, but my table is starting to grow really large, with thousands of rows.
I want to modify this script so that I can pass a variable into it from Flex so that it retrieves only the rows that match the "$subscriber_id" variable I pass. That way the results are not the entire table's data, only the rows that match 'subscriber_id'.
I know how to pass a variable from Flex into php and the code on the php side to pick it up would look like this:
$subscriberID = $_POST['subscriberID'];
Can anyone shed light as to the proper code modification in "findAll" which will take my $subscriberID variable and compare it to the 'subscriber_id' field and then only return those rows that match? I think it has something to do with lines 98 to 101.
Any help is appreciated.
<?php
require_once(dirname(__FILE__) . "/2257safeDBconn.php");
require_once(dirname(__FILE__) . "/functions.inc.php");
require_once(dirname(__FILE__) . "/XmlSerializer.class.php");

/**
 * This is the main PHP file that processes the HTTP parameters,
 * performs the basic db operations (FIND, INSERT, UPDATE, DELETE)
 * and then serializes the response in an XML format.
 * XmlSerializer uses a PEAR xml parser to generate an xml response.
 * It takes a php array and generates xml according to the following rules:
 * - the root tag name is called "response"
 * - if the current value is a hash, generate a tagname with the key value, recurse inside
 * - if the current value is an array, generate tags with the default value "row"
 * For example, for the following array:
 * $arr = array(
 *     "data" => array(
 *         array("id_pol" => 1, "name_pol" => "name 1"),
 *         array("id_pol" => 2, "name_pol" => "name 2")
 *     ),
 *     "metadata" => array(
 *         "pageNum" => 1,
 *         "totalRows" => 345
 *     )
 * );
 * we will get an xml of the following form:
 * <?xml version="1.0" encoding="ISO-8859-1"?>
 * <response>
 *     <data>
 *         <row>
 *             <id_pol>1</id_pol>
 *             <name_pol>name 1</name_pol>
 *         </row>
 *         <row>
 *             <id_pol>2</id_pol>
 *             <name_pol>name 2</name_pol>
 *         </row>
 *     </data>
 *     <metadata>
 *         <totalRows>345</totalRows>
 *         <pageNum>1</pageNum>
 *     </metadata>
 * </response>
 * Please notice that the generated server side code does not have any
 * specific authentication mechanism in place.
 */

/**
 * The filter field. This is the only field that we will filter on.
 */
$filter_field = "subscriber_id";

/**
 * We need to escape the value, so we need to know what type it is.
 * Possible values: text, long, int, double, date, defined.
 */
$filter_type = "text";

/**
 * Constructs and executes a sql select query against the selected database.
 * Can take the following parameters:
 * $_REQUEST["orderField"] - the field by which we do the ordering. MUST appear inside $fields.
 * $_REQUEST["orderValue"] - ASC or DESC. If neither, the default value is ASC.
 * $_REQUEST["filter"] - the filter value
 * $_REQUEST["pageNum"] - the page index
 * $_REQUEST["pageSize"] - the page size (number of rows to return)
 * If neither pageNum nor pageSize appear, we do a full select, no limit.
 * Returns: an array of the form
 * array(
 *     "data" => array(
 *         array("field1" => "value1", "field2" => "value2")
 *     ),
 *     "metadata" => array(
 *         "pageNum" => page_index,
 *         "totalRows" => number_of_rows
 *     )
 * );
 */
function findAll() {
    global $conn, $filter_field, $filter_type;
    // The list of fields in the table. We need this to check that the
    // sent value for the ordering is indeed correct.
    $fields = array('id','subscriber_id','lastName','firstName','birthdate','gender');
    $where = "";
    if (@$_REQUEST['filter'] != "") {
        $where = "WHERE " . $filter_field . " LIKE " . GetSQLValueStringForSelect(@$_REQUEST["filter"], $filter_type);
    }
    $order = "";
    if (@$_REQUEST["orderField"] != "" && in_array(@$_REQUEST["orderField"], $fields)) {
        $order = "ORDER BY " . @$_REQUEST["orderField"] . " " . (in_array(@$_REQUEST["orderDirection"], array("ASC", "DESC")) ? @$_REQUEST["orderDirection"] : "ASC");
    }
    // calculate the number of rows in this table
    $rscount = mysql_query("SELECT count(*) AS cnt FROM `modelName` $where");
    $row_rscount = mysql_fetch_assoc($rscount);
    $totalrows = (int) $row_rscount["cnt"];
    // get the page number and the page size
    $pageNum = (int)@$_REQUEST["pageNum"];
    $pageSize = (int)@$_REQUEST["pageSize"];
    // calculate the start row for the limit clause
    $start = $pageNum * $pageSize;
    // construct the query, using the where and order conditions
    $query_recordset = "SELECT id,subscriber_id,lastName,firstName,birthdate,gender FROM `modelName` $where $order";
    // if we use pagination, add the limit clause
    if ($pageNum >= 0 && $pageSize > 0) {
        $query_recordset = sprintf("%s LIMIT %d, %d", $query_recordset, $start, $pageSize);
    }
    $recordset = mysql_query($query_recordset, $conn);
    // if we have rows in the table, loop through them and fill the array
    $toret = array();
    while ($row_recordset = mysql_fetch_assoc($recordset)) {
        array_push($toret, $row_recordset);
    }
    // create the standard response structure
    $toret = array(
        "data" => $toret,
        "metadata" => array(
            "totalRows" => $totalrows,
            "pageNum" => $pageNum
        )
    );
    return $toret;
}

/**
 * Constructs and executes a sql count query against the selected database.
 * Can take the following parameters:
 * $_REQUEST["filter"] - the filter value
 * Returns: an array of the form
 * array(
 *     "data" => number_of_rows,
 *     "metadata" => array()
 * );
 */
function rowCount() {
    global $conn, $filter_field, $filter_type;
    $where = "";
    if (@$_REQUEST['filter'] != "") {
        $where = "WHERE " . $filter_field . " LIKE " . GetSQLValueStringForSelect(@$_REQUEST["filter"], $filter_type);
    }
    // calculate the number of rows in this table
    $rscount = mysql_query("SELECT count(*) AS cnt FROM `modelName` $where");
    $row_rscount = mysql_fetch_assoc($rscount);
    $totalrows = (int) $row_rscount["cnt"];
    // create the standard response structure
    $toret = array(
        "data" => $totalrows,
        "metadata" => array()
    );
    return $toret;
}
-
SQL Server 2012 Undetected Deadlock in a table with only one row
We migrated our SQL 2000 Enterprise database to SQL 2012 Enterprise a few days ago.
This is our main database, so most of the applications access it.
The day after the migration, when users started to run tasks, database access suffered a total failure.
That is, all processes in the SQL 2k12 database were locked against each other. This looks like a common deadlock case, but the Database Engine was unable to detect it.
After some research, we found that the applications were trying to access a very simple table with only one row. This table has a number that is restarted every day and is used to number all the transactions made against the system. So, client
applications start a new transaction, get the current number, increment it by one and commit the transaction.
The only solution we found was to kill all user processes in SQL Server every time this situation occurs (no more than 5 minutes when all clients are accessing the database).
No client application was changed in this migration and this process was working very well for the last 10 years.
The problem is that SQL 2k12 seems unable to handle this situation, compared to SQL 2k.
It seems to occur with other tables too, but as this is an "entry table" the problem shows up here first.
I have searched the internet, and some suggest workarounds like using table hints to completely lock the table at the beginning of the transaction, but that can't be applied to the other tables.
Has anyone heard of this being a problem with SQL 2k12? Are there any fixes to make SQL 2k12 behave as well as SQL 2k did?
First off, re: "Unfortunately, this can't be used in a production environment, as an exclusive table lock would serialize access to the tables, and other tables would suffer from this problem."
This is incorrect.
Using a table to generate sequence numbers like this is a bad idea exactly because the access must be serialized. Since you can't switch to a SEQUENCE object, which is the correct solution, the _entire goal_ of this exercise is to find a way to properly
serialize access to this table. Exclusive locking will not be necessary for all the tables; just for the single-row table used for generating sequence values with a cursor.
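Since this is SQL Server 2012, a SEQUENCE is available if the application can ever be changed; for reference, the sequence-based alternative mentioned above would look roughly like this (object names are hypothetical, and the daily restart would need a scheduled job):

```sql
CREATE SEQUENCE dbo.DailyTransactionNumber
    AS int
    START WITH 1
    INCREMENT BY 1;
GO
-- Each caller gets a distinct number atomically, with no shared
-- row to lock and therefore no deadlock opportunity.
SELECT NEXT VALUE FOR dbo.DailyTransactionNumber;
GO
-- Run once a day (e.g. from a SQL Agent job) to restart numbering:
ALTER SEQUENCE dbo.DailyTransactionNumber RESTART WITH 1;
```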
I converted the sample program to VB.NET:
Public Class Form1
Private mbCancel As Boolean = False
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Dim soConn As ADODB.Connection
Dim soRst As ADODB.Recordset
Dim sdData As Date
Dim slValue As Long
Dim slDelay As Long
'create database vbtest
'go
' CREATE TABLE [dbo].[ControlNumTest](
' [UltData] [datetime] NOT NULL,
' [UltNum] [int] NOT NULL,
' CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED
' (
'   [UltData] ASC
' ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
' ) ON [PRIMARY]
mbCancel = False
Do
' Configure the Connection object
soConn = New ADODB.Connection
With soConn
.ConnectionString = "Provider=SQLNCLI11;Initial Catalog=vbtest;Data Source=localhost;trusted_connection=yes"
.IsolationLevel = ADODB.IsolationLevelEnum.adXactCursorStability
.Mode = ADODB.ConnectModeEnum.adModeReadWrite
.CursorLocation = ADODB.CursorLocationEnum.adUseServer
.Open()
End With
' Start a new transaction
Call soConn.BeginTrans()
' Configure the RecordSet object
soRst = New ADODB.Recordset
With soRst
.ActiveConnection = soConn
.CursorLocation = ADODB.CursorLocationEnum.adUseServer
.CursorType = ADODB.CursorTypeEnum.adOpenForwardOnly
.LockType = ADODB.LockTypeEnum.adLockPessimistic
.Open("SELECT * FROM dbo.ControlNumTest")
End With
With soRst
sdData = .Fields!UltData.Value ' Read the last date (LOCK INFO 1: see comments below)
slValue = .Fields!UltNum.Value ' Read the last Number
If sdData <> Date.Now.Date Then ' Date has changed?
sdData = Date.Now.Date
slValue = 1 ' Restart number
End If
.Fields!UltData.Value = sdData ' Update data
.Fields!UltNum.Value = slValue + 1 ' Next number
End With
Call soRst.Update()
Call soRst.Close()
' Ends the transaction
Call soConn.CommitTrans()
Call soConn.Close()
soRst = Nothing
soConn = Nothing
txtUltNum.Text = slValue + 1 ' Display the last number
Application.DoEvents()
slDelay = Int(((Rnd * 250) + 100) / 100) * 100
System.Threading.Thread.Sleep(slDelay)
Loop While mbCancel = False
If mbCancel = True Then
Call MsgBox("The test was canceled")
End If
Exit Sub
End Sub
Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
mbCancel = True
End Sub
End Class
And created the table
CREATE TABLE [dbo].[ControlNumTest](
    [UltData] [datetime] NOT NULL,
    [UltNum] [int] NOT NULL,
    CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED
    (
        [UltData] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT INTO ControlNumTest VALUES (CAST(GETDATE() AS date), 1)
Then ran 3 copies of the program and generated the deadlock:
<deadlock>
<victim-list>
<victimProcess id="processf27b1498" />
</victim-list>
<process-list>
<process id="processf27b1498" taskpriority="0" logused="0" waitresource="KEY: 35:72057594039042048 (a01df6b954ad)" waittime="1970" ownerId="3181" transactionname="implicit_transaction" lasttranstarted="2014-02-14T15:49:31.263" XDES="0xf04da3a8" lockMode="X" schedulerid="4" kpid="9700" status="suspended" spid="51" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2014-02-14T15:49:31.267" lastbatchcompleted="2014-02-14T15:49:31.267" lastattention="1900-01-01T00:00:00.267" clientapp="vbt" hostname="DBROWNE2" hostpid="21152" loginname="NORTHAMERICA\dbrowne" isolationlevel="read committed (2)" xactid="3181" currentdb="35" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" stmtstart="80" sqlhandle="0x020000008376181f3ad0ea908fe9d8593f2e3ced9882f5c90000000000000000000000000000000000000000">
UPDATE [dbo].[ControlNumTest] SET [UltData]=@Param000004,[UltNum]=@Param000005 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
(@Param000004 datetime,@Param000005 int)UPDATE [dbo].[ControlNumTest] SET [UltData]=@Param000004,[UltNum]=@Param000005 </inputbuf>
</process>
<process id="processf6ac9498" taskpriority="0" logused="10000" waitresource="KEY: 35:72057594039042048 (a01df6b954ad)" waittime="1971" schedulerid="5" kpid="30516" status="suspended" spid="55" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2014-02-14T15:49:31.267" lastbatchcompleted="2014-02-14T15:49:31.267" lastattention="1900-01-01T00:00:00.267" clientapp="vbt" hostname="DBROWNE2" hostpid="27852" loginname="NORTHAMERICA\dbrowne" isolationlevel="read committed (2)" xactid="3182" currentdb="35" lockTimeout="4294967295" clientoption1="671156256" clientoption2="128058">
<executionStack>
<frame procname="adhoc" line="1" sqlhandle="0x020000003c6309232ab0edbe0a7790a816a09c4c5ac6f43c0000000000000000000000000000000000000000">
FETCH API_CURSOR0000000000000001 </frame>
<frame procname="unknown" line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
FETCH API_CURSOR0000000000000001 </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057594039042048" dbid="35" objectname="vbtest.dbo.ControlNumTest" indexname="PK_CorreioNumTeste" id="lockff6e6c80" mode="U" associatedObjectId="72057594039042048">
<owner-list>
<owner id="processf6ac9498" mode="S" />
<owner id="processf6ac9498" mode="U" requestType="wait" />
</owner-list>
<waiter-list>
<waiter id="processf27b1498" mode="X" requestType="convert" />
</waiter-list>
</keylock>
<keylock hobtid="72057594039042048" dbid="35" objectname="vbtest.dbo.ControlNumTest" indexname="PK_CorreioNumTeste" id="lockff6e6c80" mode="U" associatedObjectId="72057594039042048">
<owner-list>
<owner id="processf27b1498" mode="U" />
<owner id="processf27b1498" mode="U" />
<owner id="processf27b1498" mode="X" requestType="convert" />
</owner-list>
<waiter-list>
<waiter id="processf6ac9498" mode="U" requestType="wait" />
</waiter-list>
</keylock>
</resource-list>
</deadlock>
It's the S lock that comes from the cursor read that's the villain here. U locks are compatible with S locks, so one session gets a U lock while another holds an S lock. But then the session with the S needs a U, and the session with the U needs an
X. Deadlock.
I'm not sure what kind of locks were taken by this cursor code on SQL 2000, but on SQL 2012, this code is absolutely broken and should deadlock.
The right way to fix this code is to add (UPDLOCK,SERIALIZABLE) to the cursor
.Open("SELECT * FROM dbo.ControlNumTest with (updlock,serializable)")
So each session reads the table with a restrictive lock, and you don't mix S, U and X locks in this transaction. This resolves the deadlock, but requires a code change.
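For comparison, the same serialization can be done server-side in one atomic statement, avoiding the cursor entirely; a sketch against the ControlNumTest table above (the daily-reset logic is omitted for brevity):

```sql
DECLARE @num int;
-- Single UPDATE: reads and increments under one restrictive lock,
-- so no S -> U -> X lock conversion is ever needed.
UPDATE dbo.ControlNumTest WITH (UPDLOCK, SERIALIZABLE)
SET @num = UltNum = UltNum + 1;
SELECT @num AS NewNumber;
```

The `@var = col = expr` form assigns the post-update value to the variable in the same statement, which is what makes the read-increment-return atomic.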
I tried several things that didn't require a code change, none of which resolved the deadlock:
1) setting ALLOW_ROW_LOCKS=OFF ALLOW_PAGE_LOCKS=OFF
2) SERIALIZABLE isolation level
3) Switching OleDB providers from SQLOLEDB to SQLNCLI11
Then I replaced the table with a view containing a lock hint:
CREATE TABLE [dbo].[ControlNumTest_t](
    [UltData] [datetime] NOT NULL,
    [UltNum] [int] NOT NULL,
    CONSTRAINT [PK_CorreioNumTeste] PRIMARY KEY CLUSTERED
    (
        [UltData] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE VIEW ControlNumTest AS
SELECT * FROM ControlNumTest_t WITH (TABLOCKX)
Which, at least in my limited testing, resolved the deadlock without any client code change.
David
http://blogs.msdn.com/b/dbrowne/ -
One to many to return only one row
hi guys,
I have an urgent problem here. I have a SQL statement like below:
SELECT ALM_SWAP2_REP.M_TP_RTMFRP0, ALM_SWAP2_REP.M_TP_RTMFRP1,
ALM_SWAP2_REP.M_TP_RTMFRF0, ALM_SWAP2_REP.M_TP_RTMFRF1,
To_Char(TRN_HDR_DBF.M_OPT_FLWFST,'YYYYMMDD') AS VALDATE, ALM_SWAP2_REP.M_TP_RTFXC02,
ALM_SWAP2_REP.M_TP_RTFXC12, TRN_HDR_DBF.M_BRW_NOMU1, TRN_HDR_DBF.M_BRW_NOMU2,
ALM_SWAP2_REP.M_TP_RTAMC02, ALM_SWAP2_REP.M_TP_RTAMC12,
(CASE WHEN PAY_FLOW_DBF.M_LEG = 0 AND '20100831' BETWEEN To_Char(PAY_FLOW_DBF.M_CALC_DATE0,'YYYYMMDD') AND To_Char(PAY_FLOW_DBF.M_CALC_DATE1,'YYYYMMDD') THEN To_Char(PAY_FLOW_DBF.M_CALC_DATE1,'YYYYMMDD') END) AS RCV_DATE,
(CASE WHEN PAY_FLOW_DBF.M_LEG = 1 AND '20100831' BETWEEN To_Char(PAY_FLOW_DBF.M_CALC_DATE0,'YYYYMMDD') AND To_Char(PAY_FLOW_DBF.M_CALC_DATE1,'YYYYMMDD') THEN To_Char(PAY_FLOW_DBF.M_CALC_DATE1,'YYYYMMDD') END) AS PAY_DATE
FROM ALM_SWAP2_REP, TRN_HDR_DBF, PAY_FLOW_DBF WHERE ALM_SWAP2_REP.M_REF_DATA = 456576
AND ALM_SWAP2_REP.M_NB = TRN_HDR_DBF.M_NB
AND ALM_SWAP2_REP.M_NB = PAY_FLOW_DBF.M_TRN_ID
AND ALM_SWAP2_REP.M_NB = 228128
AND '20100831' BETWEEN To_Char(PAY_FLOW_DBF.M_CALC_DATE0,'YYYYMMDD') AND To_Char(PAY_FLOW_DBF.M_CALC_DATE1,'YYYYMMDD')
ORDER BY ALM_SWAP2_REP.M_TRN_GRP, ALM_SWAP2_REP.M_NB
When I join a few tables, the results come back in two rows, because two records in table PAY_FLOW_DBF match one row in table ALM_SWAP2_REP. I need both matches, but I want them returned in only one row, without using GROUP BY. Please help me. Thanks
user9274041 wrote:
i have an urgent problem here.
http://www.oracle.com/html/terms.html
>
4. Use of Community Services
Community Services are provided as a convenience to users and Oracle is not obligated to provide any technical support for, or participate in, Community Services. While Community Services may include information regarding Oracle products and services, including information from Oracle employees, they are not an official customer support channel for Oracle.
You may use Community Services subject to the following: (a) Community Services may be used solely for your personal, informational, noncommercial purposes; (b) Content provided on or through Community Services may not be redistributed; and (c) personal data about other users may not be stored or collected except where expressly authorized by Oracle
>
Could you explain how something that is for your personal, informational, noncommercial purposes can be urgent?
Or is this a violation of the terms of use and abuse of these forums? -
Selecting only one row at a time
Hi experts,
i have following doubt regarding selecting rows from a db:
Is there any way of selecting only one row AT A TIME from a database, just to collect the data row by row instead of in a single document containing all the rows?
I would like you to elaborate on this, as I need to send one row to the IE, then another row, and so on, without throwing any error!
I have seen that there are SELECT SINGLE and SELECT UP TO 1 ROWS, but these two are only useful when retrieving a single row, and that does not match my requirements. I need to process all the rows, but one by one.
I know that we can use the receiver JDBC adapter as if it were a sender by means of its specific datatype, but how to do it row by row?
Hope I have explained it well.
Thanks in advance and best regards,
David
Hi Kiran,
Yes, my table has 5 not-null fields, but I am selecting and updating fixed values, so I think I will definitely go for this solution:
SELECT * FROM t1 WHERE status='0' and ROWNUM<2;
UPDATE t1 SET status='1' WHERE status='0' and ROWNUM<2;
My only concern is whether the update will hit the same row that the select returned... BTW, I think it will.
What do you guys think?
I have been trying your proposed queries but received some errors. Your queries are very interesting, but I think the two above meet my requirements, as the status field will be 0 for unprocessed rows and 1 for processed ones (and the update will look for a row that meets the same WHERE clause as the select, and only then set status='1').
The only thing I have to be careful about is what I asked above.
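One way to remove that doubt entirely is to lock the row when selecting it, so the update is guaranteed to hit the same row; a PL/SQL sketch, assuming t1 has a primary key column id (a hypothetical column name):

```sql
DECLARE
  v_id t1.id%TYPE;
BEGIN
  -- Lock one unprocessed row; a concurrent session blocks here
  -- instead of grabbing the same row.
  SELECT id INTO v_id
    FROM t1
   WHERE status = '0'
     AND ROWNUM < 2
  FOR UPDATE;

  UPDATE t1
     SET status = '1'
   WHERE id = v_id;

  COMMIT;
END;
/
```

Without FOR UPDATE, a second session could read the same status='0' row between your SELECT and UPDATE.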
Thanks a lot and best regards,
David -
Way to rollback only one row of a view object
Is there a way to "rollback" only one row of a view object? That is, two or more rows have been modified but we wish to only restore one of the rows to the original (database) values.
Is there a way to retain all of the current rows in all of the view objects for an application module after issuing a jbo:Rollback?
Thanks
> Is there a way to "rollback" only one row of a view object? That is, two or more rows have been modified but we wish to only restore one of the rows to the original (database) values.
In JDev 9.0.3, a new method is being added to the Row interface to reset the row state to the transaction-original state, i.e., roll the row back to its original values.
> Is there a way to retain all of the current rows in all of the view objects for an application module after issuing a jbo:Rollback?
No. You may, however, override the before/afterRollback methods on your ViewObjectImpl subclasses to cache the current row key in beforeRollback and restore currency on the default iterator to that row key in afterRollback.
Thanks -
Extract report only 15 rows at a time
Hi,
I need to generate a report; when I export it to a spreadsheet it must take only 15 rows at a time. How can I do this? Please suggest.
Thanks
Sudhir.
Try setting maxBufferRowSize.
But I doubt that is related; you have probably maxed out the allowed sheet size for your version of Excel.
Arthur My Blog -
How I show only 20 rows per pages in a rtf report in BI Publisher 11g
I'm making a report and I want to show only 20 rows per page in an RTF template. I can't achieve this with anything such as xsl, xslt...
I'm a new user... please, any idea???
Thank for all.
Edited by: 844565 on Mar 15, 2011 7:34 AM
Instead of doing that, take the URL from CURRENT_SERVER_URL (a pre-defined BI Publisher variable) by declaring it like below:
<?param@begin:CURRENT_SERVER_URL?>
Then add the extra parameters, e.g. region, like below:
{$CURRENT_SERVER_URL}Data/Country.xdo&p_region={REGION} <- get this REGION value from the XML
Cheers,
ND
Use the "helpful" or "correct" buttons to award points to replies. -
GarageBand is not picking up sound from the built-in mic (rMBP).
I recorded some tracks, now it is not working. I have checked GB Preferences, Audio/MIDI > Audio Input > Built In Microphone.
All other apps can use the mic. Dictation, Skype etc. can use the mic!
Monitoring is switched ON in GB; it picks up sound only when some other app, e.g. Skype, is used to access the mic.
Can any one help me, Please!
Many thanks!
I have found the solution, rather stupid: the system setting for audio input was at 70%; increasing it to 100% solved the problem!
-
Sender file adapter to pick up only 2 files at a time
Hi ,
I need to configure my sender file adapter so that it picks up only 2 files at a time, even though more files are placed on the FTP server.
Mudit Mehra
Hi,
As said above, there is no such option to pick up only 2 files at a time; however, if possible you can create files in the sender/source directory in such a manner that only two files are created at a time.
Thanks! -
Picking a Max value row out of a group of rows.
Hi,
I'm using Oracle 10.2.0.4.0
For some reason I can't come up with a way to pick out the row that contains the max date value within each subgroup of rows. The following rows are one group of the result of a complex view ordered by ACCOUNT_NUMBER. I'm just showing the first group for demo purposes.
CUSTOMER_NAME ACCOUNT_NUMBER BOOKED_DATES OUTSTANDING_APPROVALS BOOKED_NOT_BILLED SALES
ABC company, LLC 114943 05/22/2008 11:17:05 100,072.43 100,072.43
ABC company, LLC 114943 06/30/2008 15:12:29 129,956.00 129,956.00
ABC company, LLC 114943 07/30/2008 15:57:16 10,957.00 10,957.00
This is just the first of many groups in this view. I just need a simple way to select the row with the max BOOKED_DATES. I've tried everything I could think of, but the other two rows are not going away. MAX(BOOKED_DATES) is not working in the HAVING section. I just want my output to be the rows out of each group with the most recent BOOKED_DATES.
Therefor , my output would be
CUSTOMER_NAME ACCOUNT_NUMBER BOOKED_DATES OUTSTANDING_APPROVALS BOOKED_NOT_BILLED SALES
ABC company, LLC 114943 07/30/2008 15:57:16 10,957.00 10,957.00
for ACCOUNT_NUMBER 114943. For the truly curious, the query is below. I'm sure the solution is simple, but not to me today. Maybe it's a Monday thing.
Thanks in Advance.
select distinct
party.party_name CUSTOMER_NAME, --"Customer Name"
cust_acct.account_number ACCOUNT_NUMBER,--"Account Number"
max(h.BOOKED_DATE) BOOKED_DATES,-- "Booked Dates",
osa.OUTSTANDING_SALE_AMT OUTSTANDING_APPROVALS,--"Outstanding Approvals",
ola2.BOOKED_NOT_BILLED BOOKED_NOT_BILLED,
--ola.line_id,
--h.header_id,
sum(nvl(ola.ORDERED_QUANTITY,0) * nvl(ola.UNIT_LIST_PRICE,0)) SALES,
CASE
WHEN
invoiced_amt_info.TERMS = 'Current'
THEN invoiced_amt_info.CURRENT_INV
ELSE NULL
END "CURRENT_IA",--"Current",
CASE
WHEN
invoiced_amt_info.TERMS = 'Current'
THEN invoiced_amt_info.CURRENT_TAX
ELSE NULL
END CURRENT_TAX,--"Current Tax",
CASE
WHEN
invoiced_amt_info.TERMS = '1-30 days'
THEN invoiced_amt_info.CURRENT_INV
ELSE NULL
END LT_30_DAYS,-- "1-30 Days",
CASE
WHEN
invoiced_amt_info.TERMS = '1-30 days'
THEN invoiced_amt_info.CURRENT_TAX
ELSE NULL
END LT_30_DAYS_TAX,-- "1-30 Days Tax",
CASE
WHEN
invoiced_amt_info.TERMS = '31-60 days'
THEN invoiced_amt_info.CURRENT_INV
ELSE NULL
END LT_60_DAYS,-- "1-60 Days",
CASE
WHEN
invoiced_amt_info.TERMS = '31-60 days'
THEN invoiced_amt_info.CURRENT_TAX
ELSE NULL
END LT_60_DAYS_TAX,--"1-60 Days Tax",
CASE
WHEN
invoiced_amt_info.TERMS = '61-90 days'
THEN invoiced_amt_info.CURRENT_INV
ELSE NULL
END LT_90_DAYS,-- "1-90 Days",
CASE
WHEN
invoiced_amt_info.TERMS = '61-90 days'
THEN invoiced_amt_info.CURRENT_TAX
ELSE NULL
END LT_90_DAYS_TAX,-- "1-90 Days Tax",
CASE
WHEN
invoiced_amt_info.TERMS = '90+ days'
THEN invoiced_amt_info.CURRENT_INV
ELSE NULL
END MT_90_PLUS_DAYS,-- "90+ Days",
CASE
WHEN
invoiced_amt_info.TERMS = '90+ days'
THEN invoiced_amt_info.CURRENT_TAX
ELSE NULL
END MT_90_PLUS_DAYS_TAX,--"90+ Days Tax",
uc.UNAPPLIED_CASH UNAPPLIED_CASH--"Unapplied Cash"
FROM
oe_order_headers_all h,
hz_cust_accounts cust_acct,
hz_parties party,
hz_customer_profiles cust_prof,
oe_order_lines_all ola,
(select l.HEADER_ID HEADER_ID,
l.sold_to_org_id SOLD_TO_ORG_ID,
sum(nvl(l.ORDERED_QUANTITY,0) * nvl(l.UNIT_LIST_PRICE,0)) BOOKED_NOT_BILLED
from
oe_order_lines_all l
where
l.BOOKED_FLAG <> 'N'
AND l.FLOW_STATUS_CODE <> 'CANCELLED'
AND l.INVOICE_INTERFACE_STATUS_CODE <> 'NO'
group by l.HEADER_ID, l.sold_to_org_id
) ola2,
(select INV_AMT.aginglayer, INV_AMT.aging TERMS, sum(INV_AMT.due_amount) CURRENT_INV, INV_AMT.CUSTOMER_ID, --invoiced amount Current
sum(INV_AMT.tax_amount) CURRENT_TAX --tax_amount
from (
select
c.customer_name
, c.customer_number
, c.CUSTOMER_ID
, sum(ps.amount_due_remaining) due_amount
, sum(ps.tax_remaining) tax_amount
, 'Current' aging
, 1 aginglayer
, 1 showord
from ra_customers c
, ar_payment_schedules_all ps
where ps.status = 'OP'
and ps.class <> 'PMT'
and trunc(sysdate - ps.due_date) < 1
and ps.customer_id = c.customer_id
group by c.customer_name
, c.customer_number
, c.CUSTOMER_ID
union
select
c.customer_name
, c.customer_number
, c.CUSTOMER_ID
, sum(ps.amount_due_remaining) due_amount
, sum(ps.tax_remaining) tax_amount
, '1-30 days' aging
, 2 aginglayer
, 2 showord
from ra_customers c
, ar_payment_schedules_all ps
where ps.status = 'OP'
and ps.class <> 'PMT'
and trunc(sysdate - ps.due_date) >= 1
and trunc(sysdate - ps.due_date) <= 30
and ps.customer_id = c.customer_id
group by c.customer_name
, c.customer_number
, c.CUSTOMER_ID
union
select
c.customer_name
, c.customer_number
, c.CUSTOMER_ID
, sum(ps.amount_due_remaining) due_amount
, sum(ps.tax_remaining) tax_amount
, '31-60 days' aging
, 3 aginglayer
, 3 showord
from ra_customers c
, ar_payment_schedules_all ps
where ps.status = 'OP'
and ps.class <> 'PMT'
and trunc(sysdate - ps.due_date) > 30
and trunc(sysdate - ps.due_date) <= 60
and ps.customer_id = c.customer_id
group by c.customer_name
, c.customer_number
, c.CUSTOMER_ID
union
select
c.customer_name
, c.customer_number
, c.CUSTOMER_ID
, sum(ps.amount_due_remaining) due_amount
, sum(ps.tax_remaining) tax_amount
, '61-90 days' aging
, 4 aginglayer
, 4 showord
from ra_customers c
, ar_payment_schedules_all ps
where ps.status = 'OP'
and ps.class <> 'PMT'
and trunc(sysdate - ps.due_date) > 60
and trunc(sysdate - ps.due_date) <= 90
and ps.customer_id = c.customer_id
group by c.customer_name
, c.customer_number
, c.CUSTOMER_ID
union
select
c.customer_name
, c.customer_number
, c.CUSTOMER_ID
, sum(ps.amount_due_remaining) due_amount
, sum(ps.tax_remaining) tax_amount
, '90+ days' aging
, 5 aginglayer
, 5 showord
from ra_customers c
, ar_payment_schedules_all ps
, ra_customer_trx_all trx
, ra_cust_trx_types_all types
where ps.status = 'OP'
and ps.class <> 'PMT'
and trunc(sysdate - ps.due_date) > 90
and ps.customer_id = c.customer_id
and trx.customer_trx_id = ps.customer_trx_id
and types.cust_trx_type_id = trx.cust_trx_type_id
and types.name <> 'CSG-Conversion Pmt'
and types.org_id= 1
group by c.customer_name
, c.customer_number
, c.CUSTOMER_ID
) INV_AMT
group by aginglayer, aging, showord, INV_AMT.CUSTOMER_ID
) invoiced_amt_info,
(select ra_cust.customer_name CUSTOMER_NAME, ra_cust.customer_number CUSTOMER_NUMBER, ra_cust.customer_id CUSTOMER_ID,
sum(pay_sched.amount_due_remaining) UNAPPLIED_CASH
from ra_customers ra_cust
, ar_payment_schedules_all pay_sched
where
pay_sched.status = 'OP'
and pay_sched.class = 'PMT'
and pay_sched.due_date > trunc(sysdate - 365)
and pay_sched.customer_id = ra_cust.customer_id
group by ra_cust.customer_name, ra_cust.CUSTOMER_NUMBER, ra_cust.CUSTOMER_ID
) uc,
(select qh.cust_account_id CUST_ACCOUNT_ID, sum(qh.total_quote_price) OUTSTANDING_SALE_AMT
from ASO_QUOTE_HEADERS_ALL qh,ASO_QUOTE_STATUSES_TL st
where st.quote_status_id = qh.quote_status_id
and st.meaning ='Credit Hold'
group by qh.cust_account_id
) osa
Where
h.HEADER_ID = ola.HEADER_ID
AND h.HEADER_ID = ola2.HEADER_ID
AND ola.sold_to_org_id = cust_acct.cust_account_id(+)
AND ola2.sold_to_org_id = ola.sold_to_org_id(+)
AND cust_acct.party_id = party.party_id(+)
AND cust_acct.CUST_ACCOUNT_ID = cust_prof.CUST_ACCOUNT_ID(+)
AND cust_prof.party_id = party.party_id
AND cust_prof.CUST_ACCOUNT_ID = invoiced_amt_info.CUSTOMER_ID(+)
AND cust_prof.CUST_ACCOUNT_ID = uc.CUSTOMER_ID(+)
AND cust_prof.CUST_ACCOUNT_ID = osa.CUST_ACCOUNT_ID(+)
group by party.party_name, cust_acct.account_number, invoiced_amt_info.TERMS, osa.OUTSTANDING_SALE_AMT,
ola2.BOOKED_NOT_BILLED,
invoiced_amt_info.CURRENT_INV,
invoiced_amt_info.CURRENT_TAX, uc.UNAPPLIED_CASH
order by party.party_name
Example
--Sample Data
SQL>select deptno, empno, sal,
2 max(sal) over ( partition by deptno order by deptno) mv
3* from emp
SQL> /
DEPTNO EMPNO SAL MV
10 7782 2450 5000
10 7839 5000 5000
10 7934 1300 5000
20 7566 2975 3000
20 7902 3000 3000
20 7876 1100 3000
20 7369 800 3000
20 7788 3000 3000
30 7521 1250 2850
30 7844 1500 2850
30 7499 1600 2850
30 7900 950 2850
30 7698 2850 2850
30 7654 1250 2850
14 rows selected.
SQL>select * from
2 (
3 select deptno, empno, sal,
4 max(sal) over ( partition by deptno order by deptno) mv
5 from emp
6* ) where sal = mv
SQL> /
DEPTNO EMPNO SAL MV
10 7839 5000 5000
20 7902 3000 3000
20 7788 3000 3000
30 7698 2850 2850
SS -
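The analytic-function pattern in the answer above is portable beyond Oracle. As an illustration, here is the same MAX() OVER (PARTITION BY ...) inner-query technique run against a small stand-in table via Python's sqlite3 module (this assumes SQLite 3.25+, which added window-function support):

```python
import sqlite3

# Demonstrate the answer's analytic-function pattern: compute the
# per-group max in an inner query, then keep only rows that match it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (deptno INT, empno INT, sal INT);
INSERT INTO emp VALUES (10,7782,2450),(10,7839,5000),(10,7934,1300),
                       (20,7566,2975),(20,7902,3000),(20,7788,3000);
""")
rows = conn.execute("""
SELECT deptno, empno, sal FROM (
    SELECT deptno, empno, sal,
           MAX(sal) OVER (PARTITION BY deptno) AS mv
    FROM emp
) WHERE sal = mv
ORDER BY deptno, empno
""").fetchall()
print(rows)  # each department's top-paid row(s); ties are all kept
```

Note that, as in the forum answer, ties on the max value all survive (both empno 7902 and 7788 in deptno 20); use ROW_NUMBER() instead of MAX() if exactly one row per group is required.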
I'm using XMLA with a custom DSV to do a ProcessAdd. In my testing, a ProcessAdd of only 1 row to a big dimension (already more than 100 million rows) costs more than 10 minutes. Below is the test environment:
DimensionTable: TestDimension with only one column, PKey, which is the identity key; type is int.
Test machine: Virtual Machine, Windows Server 2012 + SSAS 2012. The machine has 64 GB memory and an 8-core CPU.
Can anyone tell me why a ProcessAdd of only 1 row costs so much time? What does SSAS do when it runs ProcessAdd on a dimension? I want internal explanations. Using the Profiler tool, I found many data reads. Is this a bug in SSAS? Does it need to read all the dimension data to rebuild the dimension structure, levels, or indexes?
Hi Hong,
According to your description, you want to know why a ProcessAdd of only 1 row costs 10 minutes, right? ProcessAdd sounds like a useful option (faster than ProcessUpdate, less impact than ProcessFull, etc.), but when you try to use it you will find it is not as straightforward as the others. Here is a blog which explains the ProcessAdd option in depth; please refer to the link below.
SSAS - ProcessAdd explained
Regards,
Charlie Liao
TechNet Community Support -
Dynamic Table in PDF - only first row passed to the WD Java
Hi Experts,
I'm working with Web Dynpro for Java on WAS 2004s SP13, ADS for SP13 and LiveCycle Designer 7.1
I am facing a problem related to PDF-dynamic table generation.
I am creating a PDF form with a dynamic table: an empty row is added when the ADD button is clicked, and the row is deleted when the DELETE button is clicked. After the form is submitted, only the first row of the table is passed to Web Dynpro. I've tried different dataSource Context node structures without results. The structure described in the thread [Dynamic Table - same data repeating in all rows; doesn't work for me. The same happens if I try to follow the advice from the Wiki https://wiki.sdn.sap.com/wiki/display/WDJava/Creating%20Table%20in%20Interacting%20form%20using%20Web%20Dynpro.
Besides this, the DropDown list in my table column is not populated. I know how to populate a DropDown list outside of a table; that works fine. But the DropDown in the table just does not respond to clicks (it does not open). I'm pretty sure this is the result of a Context node structure/binding issue.
Please suggest how I can implement the dynamic table and populate the data in the table's dropdown column.
Edited by: A. Mustacevic on Sep 7, 2009 12:18 AM
Hi Prabhakar,
You describe exactly my situation. The node which is bound to the table row has cardinality 1..n. The exact Context structure is:
node dataSource (cardinality 1..1/ Singleton true) ======> dataSource of the Interactive Form
subnode TableList (cardinality 1..1/ Singleton true) ======> bound to the table in the Interactive Form
subnode TableWrapper (cardinality 1..n/ Singleton true) ======> bound to the table row in the Interactive Form
subnode TableData (cardinality 0..1/ Singleton false) ======> table data
attribute 1 ====> Context nodeattribute bound to the table row field
attribute 2
This structure is recommended in the post that I found on the Forum (see the first hyperlink in my first post).
Is this structure correct? Why is it not working?
Your link is not working. Can you post the correct one?
Thanks in advance.
Regards
Adnan
Edited by: A. Mustacevic on Sep 8, 2009 1:56 PM
-
Restricting the File adapter to pick up only a part of the payload
Hi ,
I have a csv file like below,
$H$,Header1, xyz, xyz
$H$,Header2, xyz, xyz
$H$,Header3, xyz, xyz
$D$,Detail1,xyz,xyz
$D$,Detail2,xyz,xyz
$D$,Detail3,xyz,xyz
The header and detail lines have no link to each other! I just wanted to check whether it is at all possible for the file adapter to skip the header lines and take only the detail lines.
Does this require any tweaking in the nxsd that is generated??
Any light on this would help... :)
Hi Anuradha,
Could you please kindly provide answers to the queries below so that forum members can have a clear picture of your requirement
1. What is the operating system of your PI server?
2. What is the version of PI you are working on?
3. What exactly do you mean by the statement "I want the adapter to pick only one file at a time rather than picking all the available files in the system."? The adapter actually picks up files one by one in each polling interval; they go into the PI pipeline one after another in quick succession.
4. If you mean that you want to insert delays between successive file pick-ups, then Baskar has already answered your question.
5. Do you want the channel to pick up only one file per day out of all possible files?
6. Could you please kindly provide any sample file name you are receiving in PI ?
regards
Anupam -
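Setting the adapter questions aside, the header-skipping logic from the original $H$/$D$ question is simple to illustrate outside the adapter. In nXSD this would normally be handled by a record-matching condition; the sketch below is a plain-Python assumption (the `detail_rows` helper and the sample payload are invented for illustration):

```python
import csv
import io

# Sample payload shaped like the question's file: $H$ header lines
# followed by $D$ detail lines, with no link between the two.
SAMPLE = """$H$,Header1,xyz,xyz
$H$,Header2,xyz,xyz
$D$,Detail1,xyz,xyz
$D$,Detail2,xyz,xyz
"""

def detail_rows(text):
    """Return only the $D$ (detail) records, dropping $H$ header lines."""
    return [row for row in csv.reader(io.StringIO(text))
            if row and row[0] == "$D$"]

rows = detail_rows(SAMPLE)
print(rows)
```

The same keep-only-matching-records idea is what an nXSD record filter would express declaratively.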
Select event not triggered in table with only one row
Hi all,
I am building a BI VC application where query data is displayed in a table. When the user clicks on a table row another query is then triggered and output in a second table. The output from table 1 is linked to the input of query2/table2 with a select event.
The problem I am facing is that if there is only one row in table 1, the select event is never triggered. If, however, there are two or more rows in the table, the select event is triggered and query 2 is executed. I have searched the forums, but all I could find on select event problems was how to avoid the initial select event.
Has anyone else experienced this issue and what is the workaround or is this a bug in Visual Composer? We are on VC 7.0 SP19.
Cheers,
Astein Meland
Thanks Chittya,
Yes, we have considered this option as well. But as we have more than one table linked together, we would like to avoid having to manually click several buttons.
In the end I found Note 1364334 describing bugfixes released in VC 7.0 SP20:
"Normally, when a Visual Composer table is populated from a data service, the first row is selected by default. However, we have found that if only one data row is returned from the data service, this row is not selected by default and cannot be manually selected by clicking on it either."
So I think we will just have to upgrade our Portal to the latest support packs to solve this problem.
Thanks,
Astein