Duplicate records on formula column
I have a dataset that I am trying to modify to remove the duplicates based on the previous record.
ID Name Number of Days
1 Bob 2
1 Jim 2
2 Harry 8
3 John 10
4 Mary 5
4 Sue 5
4 Billy 5
I want to show the Number of Days as zero on a record when the previous record has the same ID. How do I do that in plain SQL, without a PL/SQL loop? What function(s) could I use? I am putting this in Oracle Reports, and the dataset should look like this:
ID Name Number of Days
1 Bob 2
1 Jim 0
2 Harry 8
3 John 10
4 Mary 5
4 Sue 0
4 Billy 0
The actual table has many records, and with the real data the record that keeps its value is the one with the lowest name in ascending order. So if the records are:
1 MARY 10
1 SUE 10
Then, it should print :
1 MARY 10
1 SUE 0
How about this?
4 Mary 5
4 Sue 0
4 Billy 0
Shouldn't it be:
4 Billy 5
4 Mary 0
4 Sue 0
then
SQL> with t as
  (select 1 id, 'Bob' name, 2 num from dual union all
   select 1, 'Jim', 2 from dual union all
   select 2, 'Harry', 8 from dual union all
   select 3, 'John', 10 from dual union all
   select 4, 'Mary', 5 from dual union all
   select 4, 'Sue', 5 from dual union all
   select 4, 'Billy', 5 from dual)
  select id, name, decode(id, prev_id, 0, num) num
  from
  (select id, name, num, lag(id, 1, 0) over (partition by id order by id, name) prev_id
   from t);
ID NAME NUM
1 Bob 2
1 Jim 0
2 Harry 8
3 John 10
4 Billy 5
4 Mary 0
4 Sue 0
7 rows selected.
(added solution)
Message was edited by:
devmiral
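The LAG-based solution above can be sketched outside Oracle as well. Below is a minimal, hypothetical reproduction using Python's sqlite3 (SQLite 3.25+ for window functions); CASE stands in for Oracle's DECODE, the PARTITION BY is dropped since a plain ORDER BY suffices here, and the table and column names simply mirror the sample data:

```python
import sqlite3

# Hypothetical table mirroring the thread's sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, num INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, "Bob", 2), (1, "Jim", 2), (2, "Harry", 8), (3, "John", 10),
     (4, "Mary", 5), (4, "Sue", 5), (4, "Billy", 5)],
)

# LAG() fetches the previous row's id; CASE plays the role of
# Oracle's DECODE (SQLite has no DECODE function).
rows = conn.execute("""
    SELECT id, name,
           CASE WHEN id = LAG(id) OVER (ORDER BY id, name)
                THEN 0 ELSE num END AS num
    FROM t
    ORDER BY id, name
""").fetchall()
for r in rows:
    print(r)
```

As in the accepted answer, the names within each id sort ascending, so Billy keeps the value in the id=4 group.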
Similar Messages
-
Duplicate records in a column of a table
Hi,
Can someone tell me how to get the duplicate records in a column of a table
what is the sql query.
Can anyone please give an example?
select your_column, count(*)
from your_table
group by your_column
having count(*) > 1; -
Removing duplicate records from a column
I ran this query to populate a field with random numbers but it keeps populating with some duplicate records. Any idea how I can remove the duplicates?
UPDATE APRFIL
SET ALTATH = CONVERT(int, RAND(CHECKSUM(NEWID())) * 10000);Prashanth,
You are correct the update does create the non-dupes records, it just doesn't insert them into the ALTATH field. I verify by running the select altath from aprfil table and the results are not the same records display after the updates. I hope I am clear
enough and thanks for your efforts.
Can you give example of a case where it doesnt work? It may be that values in actual were not duplicates due to presence of some unprintable characters like space characters.
Visakh -
Delete duplicate records based on condition
Hi Friends,
I am scratching my head over how to select one record from a group of duplicate records based on a column condition.
Let's say I have a table with following data :
ID START_DATE END_DATE ITEM_ID MULT RETAIL | RETAIL / MULT
1 10/17/2008 1/1/2009 83 3 7 | 2.3333
2 10/17/2008 1/1/2009 83 2 4 | 2
3 10/17/2008 1/1/2009 83 2 4 | 2
4 10/31/2008 1/1/2009 89 3 6 | 2
5 10/31/2008 1/1/2009 89 4 10 | 2.5
6 10/31/2008 1/1/2009 89 4 10 | 2.5
7 10/31/2008 1/1/2009 89 6 6 | 1
8 10/17/2008 10/23/2008 124 3 6 | 2
From the above records, the rule to identify duplicates is based on START_DATE, END_DATE, ITEM_ID.
Hence the duplicate sets are {1,2,3} and {4,5,6,7}.
Now I want to keep one record from each duplicate set which has lowest value for retail/mult(retail divided by mult) and delete rest.
So from the above table data, for duplicate set {1,2,3}, the min(retail/mult) is 2. But records 2 & 3 have same value i.e. 2
In that case, pick either of those records to keep and delete the other two.
All this while it was pretty straightforward, and I was using the delete statement below.
DELETE FROM table_x a
WHERE ROWID >
(SELECT MIN (ROWID)
FROM table_x b
WHERE a.ID = b.ID
AND a.start_date = b.start_date
AND a.end_date = b.end_date
AND a.item_id = b.item_id);
Due to sudden requirement changes I need to change my SQL.
So, experts please throw some light on how to get away from this hurdle.
Thanks,
Raj.
Well, it was my mistake that I forgot to mention one more point in my earlier post.
Sentinel,
Your UPDATE perfectly works if I am updating only NEW_ID column.
But I have to update the STATUS_ID as well for these duplicate records.
ID START_DATE END_DATE ITEM_ID MULT RETAIL NEW_ID STATUS_ID | RETAIL / MULT
1 10/17/2008 1/1/2009 83 3 7 2 1 | 2.3333
2 10/17/2008 1/1/2009 83 2 4 | 2
3 10/17/2008 1/1/2009 83 2 4 2 1 | 2
4 10/31/2008 1/1/2009 89 3 6 7 1 | 2
5 10/31/2008 1/1/2009 89 4 10 7 1 | 2.5
6 10/31/2008 1/1/2009 89 4 10 7 1 | 2.5
7 10/31/2008 1/1/2009 89 6 6 | 1
8 10/17/2008 10/23/2008 124 3 6 | 2
So if I have to update the STATUS_ID as well, then there must be a WHERE clause in the update statement.
WHERE ROW_NUM = 1
AND t2.id != t1.id
AND t2.START_DATE = t1.START_DATE
AND t2.END_DATE = t1.END_DATE
AND t2.ITEM_ID = t1.ITEM_ID
In fact, the entire WHERE clause from the inner SELECT statement would have to appear in the UPDATE's WHERE clause, which makes it impossible as written, because T2 is in scope only within the first SELECT statement.
Any thoughts please ?
I appreciate your efforts.
Definitely this is a very good learning curve. In all my experience I was always writing straight forward Update statements but not like this one. Very interesting.
Thanks,
Raj. -
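The "keep one row per duplicate set by lowest retail/mult" requirement maps naturally onto ROW_NUMBER() with a tie-breaker. The sketch below is a hypothetical SQLite/Python translation (Oracle would use ROWID the same way); the table and column names follow the sample data above, and the id column is used only to break ties:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE table_x (
    id INTEGER, start_date TEXT, end_date TEXT,
    item_id INTEGER, mult REAL, retail REAL)""")
conn.executemany("INSERT INTO table_x VALUES (?,?,?,?,?,?)", [
    (1, '2008-10-17', '2009-01-01', 83, 3, 7),
    (2, '2008-10-17', '2009-01-01', 83, 2, 4),
    (3, '2008-10-17', '2009-01-01', 83, 2, 4),
    (4, '2008-10-31', '2009-01-01', 89, 3, 6),
    (5, '2008-10-31', '2009-01-01', 89, 4, 10),
    (6, '2008-10-31', '2009-01-01', 89, 4, 10),
    (7, '2008-10-31', '2009-01-01', 89, 6, 6),
    (8, '2008-10-17', '2008-10-23', 124, 3, 6),
])

# Rank rows inside each duplicate set by retail/mult; rn = 1 is the keeper.
conn.execute("""
    DELETE FROM table_x WHERE rowid NOT IN (
        SELECT rowid FROM (
            SELECT rowid,
                   ROW_NUMBER() OVER (
                       PARTITION BY start_date, end_date, item_id
                       ORDER BY retail / mult, id) AS rn
            FROM table_x)
        WHERE rn = 1)
""")
print(sorted(r[0] for r in conn.execute("SELECT id FROM table_x")))
```

For the sample data this keeps ids 2, 7, and 8: row 2 wins the id-2-vs-3 tie, row 7 has the lowest ratio in its set, and row 8 has no duplicates.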
SQL Query to retrieve one line from duplicate records
Hi
I have a table which contains duplicate records across multiple columns; the difference is in one column, which contains either 0 or a positive value. I want a query that retrieves only the line with the positive value for the duplicated records.
here below a sample data for your reference:
CREATE TABLE TRANS
(
CALLTRANSTYPE NVARCHAR2(6),
ORIGANI NVARCHAR2(40),
TERMANI NVARCHAR2(40),
STARTTIME DATE,
STOPTIME DATE,
CELLID NVARCHAR2(10),
CONNECTSECONDS NUMBER,
SWITCHCALLCHARGE NUMBER
);
INSERT INTO TRANS VALUES ('REC','555988801','222242850',to_date('05/15/2012 09:15:00','mm/dd/yyyy hh24:mi:ss'),to_date('05/15/2012 09:15:25','mm/dd/yyyy hh24:mi:ss'),null,25,0);
INSERT INTO TRANS VALUES ('REC','555988801','222242850',to_date('05/15/2012 09:15:00','mm/dd/yyyy hh24:mi:ss'),to_date('05/15/2012 09:15:25','mm/dd/yyyy hh24:mi:ss'),null,25,18000);
INSERT INTO TRANS VALUES ('REC','555988801','222242850',to_date('05/15/2012 09:18:03','mm/dd/yyyy hh24:mi:ss'),to_date('05/15/2012 09:18:20','mm/dd/yyyy hh24:mi:ss'),null,17,0);
The output i want to have is:
CALLTRANSTYPE ORIGANI TERMANI STARTTIME STOPTIME CELLID CONNECTSECONDS SWITCHCALLCHARGE
REC 555988801 222242850 05/15/2012 09:15:00 05/15/2012 09:15:25 25 18000
REC 555988801 222242850 05/15/2012 09:18:03 05/15/2012 09:18:20 17 0
Thank you.
Hi ekh,
this is the query i want to have, thank you for the help:
SQL> select * from
  (select CALLTRANSTYPE, ORIGANI, TERMANI, STARTTIME, STOPTIME, CELLID, CONNECTSECONDS, SWITCHCALLCHARGE,
          row_number() over (partition by STARTTIME, STOPTIME order by SWITCHCALLCHARGE DESC) rn
   from TRANS)
  where rn = 1;
CALLTR ORIGANI TERMANI STARTTIME STOPTIME CELLID CONNECTSECONDS SWITCHCALLCHARGE RN
REC 555988801 222242850 15-MAY-12 15-MAY-12 25 18000 1
REC 555988801 222242850 15-MAY-12 15-MAY-12 17 0 1
Regards,
Lucienot. -
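The ROW_NUMBER() pattern in that answer is portable. A hypothetical, cut-down demo using Python's sqlite3 (only the columns needed to show the technique; SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE trans (
    starttime TEXT, stoptime TEXT, connectseconds INTEGER,
    switchcallcharge INTEGER)""")
conn.executemany("INSERT INTO trans VALUES (?,?,?,?)", [
    ('2012-05-15 09:15:00', '2012-05-15 09:15:25', 25, 0),
    ('2012-05-15 09:15:00', '2012-05-15 09:15:25', 25, 18000),
    ('2012-05-15 09:18:03', '2012-05-15 09:18:20', 17, 0),
])

# ROW_NUMBER() ranks each duplicate pair by charge, highest first;
# the outer filter keeps only the top-ranked row per pair.
rows = conn.execute("""
    SELECT starttime, connectseconds, switchcallcharge FROM (
        SELECT *, ROW_NUMBER() OVER (
                   PARTITION BY starttime, stoptime
                   ORDER BY switchcallcharge DESC) AS rn
        FROM trans)
    WHERE rn = 1
    ORDER BY starttime
""").fetchall()
print(rows)
```

The non-duplicated row (charge 0) survives untouched, since it is rank 1 in its own partition.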
USE of PREVIOUS command to eliminate duplicate records in counter formula
I'm trying to create a counter formula to count the number of documents paid over 30 days. To do this I subtract the InvDate from the PayDate and then create a counter based on that value: if {days to pay} is greater than 30 then 1 else 0.
Then I sum the {days to pay} field for each group. Groups are company, month, and supplier.
Because invoices can have multiple payments and payments can have multiple invoices, there is no way around having duplicate records for the field.
So my counter is distorted by the duplicate records, and my percentage of payments over 30 days will not be accurate due to these duplicates.
I've tried Distinct Count based on this formula, and it works except that it counts 0.00 as a distinct record, so my total is off by 1 for summaries with a record where {days to pay} is less than or equal to 30.
If I subtract 1 from the formula, then it will be inaccurate for summaries with no records over 30 days.
So I've come to this:
if Previous() does not equal
then
if {days to pay} greater than 30
then 1
else 0.00
else 0.00
But it doesn't work. I've sorted the detail section by
Does anyone have any knowledge of, or success with, using the PREVIOUS command in a report?
Edited by: Fred Ebbett on Feb 11, 2010 5:41 PM
So, you have to include all data and not just use the selection criteria 'PayDate-InvDate>30'?
You will need to create a running total on the RPDOC ID, one for each section you need to show a count for, evaluating for your >30 day formula.
I don't understand why you're telling the formula to return 0.00 in your if statement.
In order to get percentages you'll need to use the distinct count (possibly running totals again but this time no formula). Then in each section you'd need a formula that divides the two running totals.
I may not have my head around the concept, since you stated "invoices can have multiple payments and payments can have multiple invoices". So invoice A can have payments 1, 2 and 3, and payment 4 can be associated with invoices B and C? Ugh. Still, though, you're evaluating every row of data. If your focus is the invoices that took longer than 30 days to be paid, I'd group on the invoice number, put the "if PayDate-InvDate>30 then 1 else 0" formula in the detail, do a sum on it in the group footer, and base my running total on the sum being >0 to do a distinct count of invoices.
Hope this points you in the right direction.
Eric -
Formula to select first record on the column
Hi All
I need a formula to select a first record on the column, here is my query
SELECT DISTINCT
dbo.OWOR.DocNum, dbo.OWOR.ItemCode, dbo.OWOR.Status, dbo.OWOR.PlannedQty, dbo.ITM1.Price, dbo.OWOR.Warehouse,
dbo.OWOR.PlannedQty * dbo.ITM1.Price AS Total
FROM dbo.OWOR INNER JOIN
dbo.ITM1 ON dbo.OWOR.ItemCode = dbo.ITM1.ItemCode
WHERE (dbo.OWOR.Status = 'P') OR
(dbo.OWOR.Status = 'R') AND dbo.ITM1.Price
I need to select the first price on the price list for (dbo.ITM1.Price).
Regards
Bongani
Bongani,
Are you sure you don't want to link a price list? The unique key on ITM1 consists of ItemCode and PriceList, so taking the first price for an item in ITM1 could result in effectively random prices for different items, depending on which price list gets filled first for a certain item. -
Query to find duplicate records across 2 columns
hi
How can I find duplicate records in a table where two columns are duplicated together?
eg: emp_id, book_id duplicated together
emp_id book_id
001 A
001 A
001 B
in this case the query should return (emp_id: 001, book_id: A) because these are duplicated together.
SQL> with t as
  (
   select 1 a, 'A' b from dual
   union all
   select 1, 'A' from dual
   union all
   select 1, 'B' from dual
  )
  select distinct a, b
  from (
   select t.*, count(1) over (partition by a, b order by a) cnt
   from t)
  where cnt > 1
  /
A B
1 A
SQL> with t as
  (
   select 1 a, 'A' b from dual
   union all
   select 1, 'A' from dual
   union all
   select 1, 'B' from dual
  )
  select a, b
  from t
  group by a, b
  having count(*) > 1
  /
A B
1 A
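The GROUP BY/HAVING form is the simplest of the two and works the same way everywhere. A hypothetical sketch via Python's sqlite3, with table and column names taken from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_books (emp_id TEXT, book_id TEXT)")
conn.executemany("INSERT INTO emp_books VALUES (?, ?)",
                 [("001", "A"), ("001", "A"), ("001", "B")])

# Grouping on BOTH columns means only rows duplicated together qualify;
# (001, B) appears once and is filtered out by HAVING.
dups = conn.execute("""
    SELECT emp_id, book_id, COUNT(*) AS cnt
    FROM emp_books
    GROUP BY emp_id, book_id
    HAVING COUNT(*) > 1
""").fetchall()
print(dups)
```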
How to suppress duplicate records in rtf templates
Hi All,
I am facing issue with payment reason comments in check template.
We are displaying payment reason comments. The issue is that while making a batch payment we get multiple payment reason comments from multiple invoices with the same name, and it doesn't look good. You can see the payment reason comments under the tail number text field in the template.
Could you provide XML syntax to suppress duplicate records, so only distinct payment reason comments are shown?
Attached screen shot, template and xml file for your reference.
Thanks,
Sagar.
I have CR XI, so the instructions are for this release.
you can create a formula, I called it cust_Matches
if = previous () then 'true' else 'false'
In your GH2 section, right-click the field, select Format Field, then select the Common tab (far left at the top).
Select the x-2 button to the right of Suppress and in the formula editor type:
{@Cust_Matches} = 'true'
Now every time {@Cust_Matches} is true, the CustID will be suppressed.
Do the same with the other fields you wish to hide, i.e. Address, City, etc. -
How to remove all duplicate values from a column
For some reason when a user is adding a record, it duplicates it three times. Why is that happening?
Since there are many, how can I remove any records that contain a duplicate in a specific column?
Is this happening for all lists in the site collection, or only this one?
Check on the list if there is any workflow attached. If yes then open the workflow in designer and check its logic it might be written to copy list items.
Investigate if there is an event receiver deployed in your site where it creates duplicate entries. There has to be some custom code running which is causing this duplication otherwise out of the box behavior of lists is never like this.
-
Hi everyone,
I'm having a a little difficulty resolving a problem with a repeating field causing duplication of data in a report I'm working on, and was hoping someone on here can suggest something to help!
My report is designed to detail library issues during a particular period, categorised by the language of the item issued. My problem is that on the SQL database that our library management system uses, it is possible for an item to have more than one language listed against it (some books will be in more than one language). When I list the loan records excluding the language data field, I get a list of distinct loan records. Bringing the language data into the report causes the loan record to repeat for each language associated with it, so if a book is in both English and French, the loan record will appear like this:
LOAN RECORD NO. LANGUAGE CODE
123456 ENG
123456 FRE
So, although the loan only occurred once I have two instances of it in my report.
I am only interested in the language that appears first and I can exclude duplicated records from the report page. I can also count only the distinct records to get an accurate overall total. My problem is that when I group the loan records by language code (I really need to do this as there are millions of loan records held in the database) the distinct count stops being a solution, as when placed at this group level it only excludes duplicates in the respective group level it's placed in. So my report would display something like this:
ENG 1
FRE 1
A distinct count of the whole report would give the correct total of 1, but a cumulative total of the figures calculated at the language code group level would total 2, and be incorrect. I've encountered similar results when using Running Totals evaluating on a formula that excludes repeated loan record no.s from the count, but again when I group on the language code this goes out of the window.
I need to find a way of grouping the loan records by language with a total count of loan records alongside each grouping that accurately reflects how many loans of that language took place.
Is this possible using a calculation formula when there are repeating fields, or do I need to find a way of merging the repeating language fields into one field so that the report would appear like:
LOAN RECORD LANGUAGE CODE
123456 ENG, FRE
Any suggestions would be greatly appreciated, as aside from this repeating language data there are quite a few other repeating database fields on the system that it would be nice to report on!
Thanks!
If you create a group by loan,
then create a group by language
place the values in the group(loan id in the loan header)
you should only see the loan id 1x.
place the language in the language group you should only see that one time
a group header returns the 1st value of a unique id....
then in order to calculate avoiding the duplicates
use manual running totals
create a set for each summary you want- make sure each set has a different variable name
MANUAL RUNNING TOTALS
RESET
The reset formula is placed in a group header or report header to reset the summary to zero for each unique record it groups by.
whileprintingrecords;
Numbervar X := 0;
CALCULATION
The calculation is placed adjacent to the field or formula that is being calculated.
(If there are duplicate values, create a group on the field that is being calculated on. If there are no duplicate records, the detail section is used.)
whileprintingrecords;
Numbervar X := x + ; ( or formula)
DISPLAY
The display is the sum of what is being calculated. This is placed in a group, page or report footer. (generally placed in the group footer of the group header where the reset is placed.)
whileprintingrecords;
Numbervar X;
X -
Check for duplicate record in SQL database before doing INSERT
Hey guys,
This is part of a PowerShell app doing a SQL insert, but my question really relates to the SQL side. I need to check the database PRIOR to doing the insert, to see whether a duplicate record exists; if it does, that record needs to be overwritten. I'm not sure how to accomplish this. My back end is SQL Server 2000. I'm piping the data into my insert statement from a PowerShell FileSystemWatcher app. In my scenario, if a file dumped into the directory starts with "I" it gets written to the SQL database; otherwise it gets written to an Access table. I know, silly, but that's the environment I'm in. Haha.
Any help is appreciated.
Thanks in Advance
Rich T.
#### DEFINE WATCH FOLDERS AND DEFAULT FILE EXTENSION TO WATCH FOR ####
$cofa_folder = '\\cpsfs001\Data_pvs\TestCofA'
$bulk_folder = '\\cpsfs001\PVS\Subsidiary\Nolwood\McWood\POD'
$filter = '*.tif'
$cofa = New-Object IO.FileSystemWatcher $cofa_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
$bulk = New-Object IO.FileSystemWatcher $bulk_folder, $filter -Property @{ IncludeSubdirectories = $false; EnableRaisingEvents= $true; NotifyFilter = [IO.NotifyFilters]'FileName, LastWrite' }
#### CERTIFICATE OF ANALYSIS AND PACKAGE SHIPPER PROCESSING ####
Register-ObjectEvent $cofa Created -SourceIdentifier COFA/PACKAGE -Action {
$name = $Event.SourceEventArgs.Name
$changeType = $Event.SourceEventArgs.ChangeType
$timeStamp = $Event.TimeGenerated
#### CERTIFICATE OF ANALYSIS PROCESS BEGINS ####
$test=$name.StartsWith("I")
if ($test -eq $true) {
$pos = $name.IndexOf(".")
$left=$name.substring(0,$pos)
$pos = $left.IndexOf("L")
$tempItem=$left.substring(0,$pos)
$lot = $left.Substring($pos + 1)
$item=$tempItem.Substring(1)
Write-Host "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp" -fore green
Out-File -FilePath c:\OutputLogs\CofA.csv -Append -InputObject "in_item_key $item in_lot_key $lot imgfilename $name in_cofa_crtdt $timestamp"
start-sleep -s 5
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=PVSNTDB33; Initial Catalog=adagecopy_daily; Integrated Security=TRUE")
$conn.Open()
$insert_stmt = "INSERT INTO in_cofa_pvs (in_item_key, in_lot_key, imgfileName, in_cofa_crtdt) VALUES ('$item','$lot','$name','$timestamp')"
$cmd = $conn.CreateCommand()
$cmd.CommandText = $insert_stmt
$cmd.ExecuteNonQuery()
$conn.Close()
#### PACKAGE SHIPPER PROCESS BEGINS ####
}
elseif ($test -eq $false) {
$pos = $name.IndexOf(".")
$left=$name.substring(0,$pos)
$pos = $left.IndexOf("O")
$tempItem=$left.substring(0,$pos)
$order = $left.Substring($pos + 1)
$shipid=$tempItem.Substring(1)
Write-Host "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp" -fore green
Out-File -FilePath c:\OutputLogs\PackageShipper.csv -Append -InputObject "so_hdr_key $order so_ship_key $shipid imgfilename $name in_cofa_crtdt $timestamp"
Rich Thompson
Hi,
Since SQL Server 2000 has been out of support, I recommend you to upgrade the SQL Server 2000 to a higher version, such as SQL Server 2005 or SQL Server 2008.
According to your description, you can try the following methods to check duplicate record in SQL Server.
1. You can use RAISERROR to check for the duplicate record: if it exists, raise an error; otherwise insert. A code block is given below:
IF EXISTS (SELECT 1 FROM TableName AS t
           WHERE t.Column1 = @Column1
           AND t.Column2 = @Column2)
BEGIN
    RAISERROR('Duplicate record', 18, 1)
END
ELSE
BEGIN
    INSERT INTO TableName (Column1, Column2, Column3)
    SELECT @Column1, @Column2, @Column3
END
2. Also you can create UNIQUE INDEX or UNIQUE CONSTRAINT on the column of a table, when you try to INSERT a value that conflicts with the INDEX/CONSTRAINT, an exception will be thrown.
Add the unique index:
CREATE UNIQUE INDEX Unique_Index_name ON TableName(ColumnName)
Add the unique constraint:
ALTER TABLE TableName
ADD CONSTRAINT Unique_Contraint_Name
UNIQUE (ColumnName)
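Both approaches, check-then-act and a unique constraint, can be demonstrated in a few lines. Here is a hypothetical sketch using Python's sqlite3 (the ON CONFLICT upsert needs SQLite 3.24+; on SQL Server 2000 you would stay with the IF EXISTS pattern above, since MERGE only arrived in 2008). Table and column names are placeholders:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 TEXT, col2 TEXT, col3 TEXT)")
# The unique index makes (col1, col2) duplicates impossible to insert.
conn.execute("CREATE UNIQUE INDEX ux_t ON t (col1, col2)")

conn.execute("INSERT INTO t VALUES ('a', 'b', 'first')")
try:
    # Second insert with the same key violates the index and raises.
    conn.execute("INSERT INTO t VALUES ('a', 'b', 'second')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# "Overwrite on duplicate" in one statement (SQLite upsert syntax):
conn.execute("""
    INSERT INTO t VALUES ('a', 'b', 'second')
    ON CONFLICT (col1, col2) DO UPDATE SET col3 = excluded.col3
""")
print(conn.execute("SELECT col3 FROM t").fetchall())
```

The upsert form also answers the original poster's "overwrite if it exists" requirement in a single round trip instead of a separate existence check.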
Thanks
Lydia Zhang -
How to delete duplicate records in all tables of the database
I would like to be able to delete all duplicate records in all the tables of the database. Many of the tables have LONG columns.
Thanks.Hello
To delete duplicates from an individual table you can use a construct like:
DELETE FROM
table_a del_tab
WHERE
del_tab.ROWID <> (SELECT
MAX(dups_tab.ROWID)
FROM
table_a dups_tab
WHERE
dups_tab.col1 = del_tab.col1
AND
dups_tab.col2 = del_tab.col2
)
You can then apply this to any table you want. The only differences will be the columns you join on in the subquery. If you want to look for duplicated data in the LONG columns themselves, I'm pretty sure you're going to need some PL/SQL coding, or maybe to convert them to BLOBs or something.
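SQLite also exposes a ROWID, so the keep-the-max-ROWID delete can be tried directly. A small hypothetical demo via Python's sqlite3, using an equivalent NOT IN form of the construct above (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (col1 TEXT, col2 TEXT)")
conn.executemany("INSERT INTO table_a VALUES (?, ?)",
                 [("x", "1"), ("x", "1"), ("y", "2")])

# Keep the row with the highest rowid in each (col1, col2) group;
# rows whose rowid is not the group maximum are deleted.
conn.execute("""
    DELETE FROM table_a
    WHERE rowid NOT IN (SELECT MAX(rowid)
                        FROM table_a
                        GROUP BY col1, col2)
""")
print(conn.execute("SELECT col1, col2 FROM table_a ORDER BY col1").fetchall())
```

Rows that were never duplicated are their own group maximum, so they survive the delete.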
HTH
David -
Using PL/SQL in a formula column in Oracle Reports Builder.
Hi,
I need to SUM two record from the result of an SQL interrogation.
Here's what it looks like
function CF_1Formula return Number is
nTot1 NUMBER :=0;
nTot2 NUMBER :=0;
begin
select sum(:TOT1) into nTot1 from table(Q1) ;
select sum(:TOT2) into nTot2 from table(Q1) ;
return (nTot1 + nTot2);
end;
I'm kind of new to formula column programming; any link of interest would be appreciated.
In the "from table(Q1)" part, Q1 represents my SQL interrogation name, and the group below it is G_MAIN.
Hi Hong Kong King Kong,
From looking at that function name (and the group name): Is this an Oracle Reports generated function?
If so, there's also a dedicated Reports forum: Reports
By the way, I like your synonym for 'query'.
I'm sure I'll confuse some of my collegues tomorrow when I will mention 'database interrogation' instead of 'query'. ;)
edit
Doh...I should not underestimate the information that is posted in thread subjects.
Edited by: hoek on May 5, 2010 9:24 PM -
Formula Column help please - URGENT
I'm trying to create a formula column as follows:
function NO_REPLIESFormula return Number is
NOREPLY number;
begin
SELECT COUNT(reply) INTO NOREPLY
FROM letters
WHERE reply = 'N'
GROUP BY ltrtype, batch;
RETURN (NOREPLY);
end;
This PL/SQL compiles fine, but when I run the report, I get the following messages:
REP-1401 no_repliesformula FATAL PL/SQL error occured. ORA-01422 exact fetch returns more than requested number of rows.
If I remove the GROUP BY ltrtype, batch, I don't get the error messages, but the result I get is the total no_replies instead of the total no_replies for each ltrtype/batch grouping.
Could someone please help me with this?
Thank you.
Hi irish,
I am not sure I understand what you are trying to say, but let me guess: you want the values to be returned on the basis of "ltrtype, batch". Which means you want more than one value, since there can be more than one group based on ltrtype and batch, and you want to display these values with their respective records?
If I am right, then there is a fault in your code: you are not specifying which value is to be displayed with which record in the report. For that, the ltrtype and batch columns must be displayed in the report, and you must add those values to the query in your code, i.e.:
function NO_REPLIESFormula return Number is
NOREPLY number;
begin
SELECT COUNT(reply) INTO NOREPLY
FROM letters
WHERE reply = 'N' and ltrtype= :V_ltrtype and batch=:v_batch;
RETURN (NOREPLY);
end;
Where :V_ltrtype and :v_batch are the run time values of each records displayed in the report.
Remember that if you don't specify this, your code will return as many records as there are distinct values of ltrtype and batch, and your variable NOREPLY can hold only one value at a time. I hope you understand both the solution and the logic behind the error.
Please correct me if i am wrong.
Thanks.
Mohib ur Rehman