Retrieve non-NULL values
Hi everybody
How do I retrieve the values that are not NULL from a table?
I mean, I want to display the non-NULL values from a column, or from a set of columns if possible.
Thanks in advance
But I don't have any criteria ... But you do have a criterion - namely, to retrieve the non-NULL values!
Maybe you need to be more clear (with sample in- and output) of what you'd actually like to achieve.
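If the goal is simply to return only the rows where a column has a value, a plain IS NOT NULL filter does it. A minimal sketch using SQLite from Python (the table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [("a",), (None,), ("b",)])

# IS NOT NULL keeps only the rows that actually have a value
rows = con.execute("SELECT col FROM t WHERE col IS NOT NULL").fetchall()
print(rows)  # [('a',), ('b',)]
```

For several columns, combine conditions: WHERE col1 IS NOT NULL AND col2 IS NOT NULL.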
Similar Messages
-
Distinct Count of Non-null Values
I have a table that has one column for providerID and then a providerID in each of several columns if the provider is under a particular type of contract.
I need a distinct count of each provider under each type of contract for every county in the US.
The distinct count is almost always one more than the actual distinct count, because most counties have at least one provider that does not have a particular contract, and the distinct count counts the NULL value as a distinct value.
I know I can alter the fields to have a zero for NULLs, ask for a minimum count, and then subtract 1 from the distinct count if the minimum is zero, but I hope there is an easier way to figure distinct counts of non-null values.
any suggestions?
Thanks,
Jennifer
Hello,
*I need a distinct count of each provider under each type of contract for every county in the US*
For the above requirement, I suggest the following approach.
Use a group expert formula for county, contract, and provider.
Now you will have the hierarchy, and you can decide at which level to apply the distinct count. You can do it as suggested by Ken Hamady.
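If the distinct count can instead be pushed down to the database, note that SQL's COUNT(DISTINCT ...) already skips NULLs, so the off-by-one problem disappears. A quick check with SQLite from Python (the county/contract data here is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE providers (county TEXT, contract_a TEXT)")
con.executemany("INSERT INTO providers VALUES (?, ?)", [
    ("Adams", "P1"), ("Adams", "P1"), ("Adams", None),  # one provider on contract A
    ("Brown", None),                                    # none on contract A
])

# COUNT(DISTINCT x) ignores NULLs, so the NULL rows never inflate the count
rows = con.execute("""
    SELECT county, COUNT(DISTINCT contract_a)
    FROM providers GROUP BY county ORDER BY county
""").fetchall()
print(rows)  # [('Adams', 1), ('Brown', 0)]
```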
Regards
Usama -
I have a column of data and there are values and nulls how would I count just the values on a summary?
Everything I have tried gives me the total number of rows, not the count of non-null values...
tia
Rose
No, you did not say anything wrong -- but when we included a CASE statement for another field in the SQL, and then referenced the new field and tried to sum it, BI Publisher gave us a 'NaN' - I don't know why...
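For the counting question itself: in SQL, COUNT(*) counts every row, while COUNT(column) counts only the non-null values. A small SQLite demonstration from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,), (None,)])

# COUNT(*) counts every row; COUNT(v) skips the NULLs
total, non_null = con.execute("SELECT COUNT(*), COUNT(v) FROM t").fetchone()
print(total, non_null)  # 4 2
```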
Rose -
LAG & LEAD functions... Any Way to Retrieve the 1st non-NULL Values?
My question is this... Has anyone found an elegant way of getting the LAG & LEAD functions to move to the 1st NON-NULL value within the partition, rather than simply using a hard-coded offset value?
Here's some test data...
IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
CREATE TABLE #temp (
BranchID INT NOT NULL,
RandomValue INT NULL,
TransactionDate DATETIME,
PRIMARY KEY (BranchID, TransactionDate)
);
INSERT #temp (BranchID,RandomValue,TransactionDate) VALUES
(339,6, '20060111 00:55:55'),
(339,NULL, '20070926 23:32:00'),
(339,NULL, '20101222 10:51:35'),
(339,NULL, '20101222 10:51:37'),
(339,1, '20101222 10:52:00'),
(339,1, '20120816 12:02:00'),
(339,1, '20121010 10:36:00'),
(339,NULL, '20121023 10:47:53'),
(339,NULL, '20121023 10:48:08'),
(339,1, '20121023 10:49:00'),
(350,1, '20060111 00:55:55'),
(350,NULL, '20070926 23:31:06'),
(350,NULL, '20080401 16:34:54'),
(350,NULL, '20080528 15:06:39'),
(350,NULL, '20100419 11:05:49'),
(350,NULL, '20120315 08:51:00'),
(350,NULL, '20120720 11:48:35'),
(350,1, '20120720 14:48:00'),
(350,NULL, '20121207 08:10:14')
What I'm trying to accomplish... In this instance, I'm trying to populate the NULL values with the 1st non-null preceding value.
The LAG function works well when there's only a single null value in a sequence, but it doesn't do the job if there's more than a single NULL in the sequence.
For example ...
SELECT
t.BranchID,
t.RandomValue,
t.TransactionDate,
COALESCE(t.RandomValue, LAG(t.RandomValue, 1) OVER (PARTITION BY t.BranchID ORDER BY t.TransactionDate)) AS LagValue
FROM
#temp t
Please note that I am aware of several methods of accomplishing this particular task, including self joins, CTEs and smearing with variables.
So, I'm not looking for an alternative way of accomplishing the task... I want to know if it's possible to do this with the LAG function.
Thanks in advance,
Jason
Jason Long
I just wanted to provide a little follow-up now that I've had some time to digest Itzik's article and test the code posted by Jingyang.
It turns out the code posted by Jingyang didn't actually produce the desired results, but it did get me pointed in the right direction (partially my fault for crappy test data that didn't lend itself to easy verification). That said, I did want to post the version of the code that does produce the correct results.
IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
CREATE TABLE #temp (
BranchID INT NOT NULL,
RandomValue INT NULL,
TransactionDate DATETIME,
PRIMARY KEY (BranchID, TransactionDate)
);
INSERT #temp (BranchID,RandomValue,TransactionDate) VALUES
(339,6, '20060111 00:55:55'), (339,NULL, '20070926 23:32:00'), (339,NULL, '20101222 10:51:35'), (339,5, '20101222 10:51:37'),
(339,2, '20101222 10:52:00'), (339,2, '20120816 12:02:00'), (339,2, '20121010 10:36:00'), (339,NULL, '20121023 10:47:53'),
(339,NULL, '20121023 10:48:08'), (339,1, '20121023 10:49:00'), (350,3, '20060111 00:55:55'), (350,NULL, '20070926 23:31:06'),
(350,NULL, '20080401 16:34:54'), (350,NULL, '20080528 15:06:39'), (350,NULL, '20100419 11:05:49'), (350,NULL, '20120315 08:51:00'),
(350,NULL, '20120720 11:48:35'), (350,4, '20120720 14:48:00'), (350,2, '20121207 08:10:14')
SELECT
t.BranchID,
t.RandomValue,
t.TransactionDate,
COALESCE(t.RandomValue,
CAST(
SUBSTRING(
MAX(CAST(t.TransactionDate AS BINARY(4)) + CAST(t.RandomValue AS BINARY(4))) OVER (PARTITION BY t.BranchID ORDER BY t.TransactionDate ROWS UNBOUNDED PRECEDING)
,5,4)
AS INT)
) AS RandomValueNew
FROM
#temp AS t
In reality, this isn’t exactly a true answer to the original question regarding the LAG & LEAD functions, being that it uses the MAX function instead, but who cares? It still uses a windowed function to solve the problem with a single pass at the data.
I also did a little additional testing to see if casting to BINARY(4) worked across the board with a variety of data types, or if the number needed to be adjusted based on the data... Here's one of my test scripts...
IF OBJECT_ID('tempdb..#temp') IS NOT NULL DROP TABLE #temp
CREATE TABLE #Temp (
ID INT,
Num BIGINT,
String VARCHAR(25),
[Date] DATETIME,
Series INT
);
INSERT #temp (ID,Num,String,Date,Series) VALUES
(1, 2, 'X', '19000101', 1), ( 2, 3, 'XX', '19000108', 1),
(3, 4, 'XXX', '19000115', 1), ( 4, 6, 'XXXX', '19000122', 1),
(5, 9, 'XXXXX', '19000129', 1), ( 6, 13, 'XXXXXX', '19000205', 2),
(7, NULL, 'XXXXXXX', '19000212', 2),
(8, NULL, 'XXXXXXXX', '19000219', 2),
(9, NULL, 'XXXXXXXXX', '19000226', 2),
(10, NULL, 'XXXXXXXXXX', '19000305', 2),
(11, NULL, NULL, '19000312', 3), ( 12, 141, NULL, '19000319', 3),
(13, 211, NULL, '19000326', 3), ( 14, 316, NULL, '19000402', 3),
(15, 474, 'XXXXXXXXXXXXXXX', '19000409', 3),
(16, 711, 'XXXXXXXXXXXXXXXX', '19000416', 4),
(17, NULL, NULL, '19000423', 4), ( 18, NULL, NULL, '19000430', 4),
(19, NULL, 'XXXXXXXXXXXXXXXXXXXX', '19000507', 4), ( 20, NULL, NULL, '19000514', 4),
(21, 5395, NULL, '19000521', 5),
(22, NULL, NULL, '19000528', 5),
(23, 12138, 'XXXXXXXXXXXXXXXXXXXXXXX', '19000604', 5),
(24, 2147483647, 'XXXXXXXXXXXXXXXXXXXXXXXX', '19000611', 5),
(25, NULL, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 5),
(26, 27310, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
(27, 9223372036854775807, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
(28, NULL, NULL, '19000618', 6),
(29, NULL, 'XXXXXXXXXXXXXXXXXXXXXXXXX', '19000618', 6),
(30, 27310, NULL, '19000618', 6)
SELECT
ID,
Num,
String,
[Date],
Series,
CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(4))) OVER (ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,4) AS BIGINT) AS NumFill,
CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(4))) OVER (PARTITION BY t.Series ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,4) AS BIGINT) AS NumFillWithPartition,
CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(8))) OVER (ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,8) AS BIGINT) AS BigNumFill,
CAST(SUBSTRING(MAX(CAST(t.[Date] AS BINARY(4)) + CAST(t.Num AS BINARY(8))) OVER (PARTITION BY t.Series ORDER BY t.[Date] ROWS UNBOUNDED PRECEDING), 5,8) AS BIGINT) AS BIGNumFillWithPartition,
CAST(SUBSTRING(MAX(CAST(t.ID AS BINARY(4)) + CAST(t.String AS BINARY(255))) OVER (ORDER BY t.ID ROWS UNBOUNDED PRECEDING), 5,255) AS VARCHAR(25)) AS StringFill,
CAST(SUBSTRING(MAX(CAST(t.ID AS BINARY(4)) + CAST(t.String AS BINARY(25))) OVER (PARTITION BY t.Series ORDER BY t.ID ROWS UNBOUNDED PRECEDING), 5,25) AS VARCHAR(25)) AS StringFillWithPartition
FROM #Temp AS t
Looks like BINARY(4) is just fine for any INT or DATE/DATETIME values. Bumping it up to BINARY(8) was needed to capture the largest BIGINT value. For text strings, the number simply needs to be set to the column size. I tested up to 255 characters without a problem.
It's not included here, but I did notice that the NUMERIC data type doesn't work at all. From what I can tell, SQL Server doesn't like casting the binary value back to NUMERIC (I didn't test DECIMAL).
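A follow-up note on the original LAG & LEAD question: SQL Server 2022 added IGNORE NULLS to LAG, LEAD, FIRST_VALUE, and LAST_VALUE, so on that version the fill can be written directly, e.g. LAST_VALUE(RandomValue) IGNORE NULLS OVER (PARTITION BY BranchID ORDER BY TransactionDate ROWS UNBOUNDED PRECEDING). On engines without IGNORE NULLS, the same "last non-null preceding value" fill can be sketched with a correlated subquery - one of the alternatives the poster already mentioned, shown here only as a runnable illustration in SQLite from Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 6), (2, None), (3, None), (4, 1), (5, None)])

# For each row, fall back to the most recent preceding non-null value
rows = con.execute("""
    SELECT id, COALESCE(v, (SELECT p.v FROM t AS p
                            WHERE p.id < t.id AND p.v IS NOT NULL
                            ORDER BY p.id DESC LIMIT 1)) AS filled
    FROM t ORDER BY id
""").fetchall()
print(rows)  # [(1, 6), (2, 6), (3, 6), (4, 1), (5, 1)]
```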
Thanks again,
Jason
Jason Long -
Counting number of non NULL values in a list
Given a list of 5 columns, how can I find out how many are not null?
Eg.
select '&Parm1','&Parm2','&Parm3','&Parm4','&Parm5', number_of_values_not_null('&Parm1','&Parm2','&Parm3','&Parm4','&Parm5')
from Table
where Criteria
would give me:
Parm1: A
Parm2: B
Parm3:
Parm4:
Parm5:
A, B, , , , 2
Thanks
NVL2 might be slightly shorter, TABLE() might be slightly more intuitive. Probably six of one...
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
SQL> SELECT NVL2 ('A', 1, 0) +
2 NVL2 ('B', 1, 0) +
3 NVL2 (NULL, 1, 0) +
4 NVL2 (NULL, 1, 0) +
5 NVL2 (NULL, 1, 0) result
6 FROM DUAL;
RESULT
2
SQL> CREATE OR REPLACE TYPE varchar2_table AS TABLE OF VARCHAR2 (4000);
2 /
Type created.
SQL> SELECT COUNT (*)
2 FROM TABLE (varchar2_table ('A', 'B', NULL, NULL, NULL))
3 WHERE column_value IS NOT NULL;
COUNT(*)
2
SQL> -
How to avoid the null values from xml publisher.
I am creating a report which has claim numbers with the values CLA001, CLA111, null, null. When I preview my report, it shows some spaces for the null values as well. How can I avoid those spaces in the report?
I am using a for loop for the claim numbers in the template.
<?for-each:ROW?> <?sort:CLAIMNUMBER;'ascending';data-type='text'?>
<?CLAIMNUMBER?>
<?end for-each?>
Please help me out to solve this problem.
Thanks,
vasanth.
Hi Sheshu,
According to your description, you are experiencing null and infinity values when browsing the calculated measure, right?
Based on my research, the issue is caused by dividing a non-zero or non-null value by zero or null. In this case, we need to check for division by zero to avoid the situation. Here is a sample query for your reference.
IIF(
Measures.[Measure B] = 0, null,
Measures.[Measure A] / Measures.[Measure B]
)
If you have any questions, please feel free to ask.
Regards,
Charlie Liao
TechNet Community Support -
Exclude NULL values from SUM and AVG calculation
Hi,
I have a column in a report that contains some NULL values. When I perform a SUM, MAX, MIN or AVG calculation on this column, the NULL values are treated as '0' and included in the calculation. Is there any way to exclude them when calculating aggregate functions?
As a result, a MIN calculation on the values (NULL, 0.7, 0.5, 0.9) gives me 0 as output when it should have been 0.5.
Can someone please help ?
Thanks and Regards,
Oliver D'mello
Hi Oliver,
According to your description, you want to ignore the NULL values when you perform aggregation functions.
In this scenario, aggregate functions always ignore NULL values, because they operate only on the non-null values. So I would like to know whether you have assigned '0' to the NULL values. I would appreciate it if you could provide some screenshots of your expressions or reports.
Besides, we have tested in our environment using Min() function. The expression returns the minimum value among the non-null numeric values. Please refer to the screenshots below:
Reference:
Min Function (Report Builder and SSRS)
Aggregate Functions Reference (Report Builder and SSRS)
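The same behavior holds at the database level: SQL aggregates such as MIN skip NULLs entirely rather than treating them as zero. A quick SQLite check from Python using the values from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m (v REAL)")
con.executemany("INSERT INTO m VALUES (?)", [(None,), (0.7,), (0.5,), (0.9,)])

# MIN ignores the NULL row; it does not treat it as 0
result = con.execute("SELECT MIN(v) FROM m").fetchone()[0]
print(result)  # 0.5
```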
If you have any question, please feel free to ask.
Best regards,
Qiuyun Yu -
Problem in summation on a column with possible null values
Hi,
I want to do a summation on a column.
If I use <?sum(amount)?> and there is any null value, it gives NaN as output.
From the forum I got the below syntax
<?sum(AMOUNT[number(.)!='NaN'])?>
but it is also not giving me the expected result - it always displays 0.
I want something like sum(NVL(amount,0)). Could somebody please help me out?
Thanks in Advance,
Thiru
If the column has many, many null values, and you want to use the index to identify the rows with non-null values, this is a good thing: a B*Tree index will not index the nulls at all. So even though your table may be very large, with many millions of rows, this index will be small and efficient, because it will only contain index entries for those rows where the column is not null.
Hope that helps,
-Mark -
Clarification needed on the behaviour of count with null values
Hi friends,
I am confused about the result given by the COUNT aggregate function when dealing with NULLs. Please can anybody clarify this for me? Here is what I tried:
CREATE TABLE Demo ( a int);
INSERT INTO Demo VALUES
(1),
(2),
(NULL),
(NULL);
SELECT COUNT(COALESCE(a,0)) FROM Demo WHERE a IS NULL; -- Returns 2
SELECT COUNT(ISNULL(a,0)) FROM Demo WHERE a IS NULL; -- Returns 2
SELECT COUNT(*) FROM Demo WHERE a IS NULL; -- Returns 2
SELECT COUNT(a) FROM Demo WHERE a IS NULL; -- Returns 0
Please can anybody explain why the result is the same for the first three SELECT statements but not for the last one? And what does COUNT(*) actually mean?
With Regards
Abhilash D K
There is a difference in the logic when using a column name versus "*" - which is explained in the documentation (and reading it is the first thing you should do when you do not understand a particular query or syntax). When you supply a column (or expression) to the COUNT function, only the non-null values are counted. ISNULL and COALESCE will obviously replace NULL values, therefore the result of the expression will be counted.
1 and 2 are effectively the same - you replace a null value in your column with 0 and each query only selects rows with a null column value. The count of non-null values of your expression is therefore 2 - the number of rows in your resultset.
3 is the number of rows in the resultset since you supplied "*" instead of a column. The presence of nulls is irrelevant as documented.
4 is the number of non-null values in the resultset since you DID supply a column. Your resultset had no non-null values, therefore the count is zero. -
Suppressing NuLL values in Crystal Report
Hi....
I'm facing a problem. I use ASP.NET and I have designed a Crystal Report. I had to place text objects in the details section along with the corresponding records from the database, as per the report's design. Now I want the NULL values in the report to be suppressed and to disappear along with their static text objects - like it happens in a GridView in ASP.NET.
Please Help me to solve this issue..
Thank you
I think I have been able to do it with a little complicated suppression logic.
When i design the report this is how it looks in design mode:
Line1 Field1
Line2 Field2
Line3 Field3
and when Field 2 is null or contains a blank string the output should look like:
Line1 Field1
Line3 Field3
1. Initially just design the text fields and db fields without any suppress.
2. Check Suppress for Line 2 and use the formula: isnull({Table.Field2}) . Do nothing to Field2. This should suppress Line2 when Field2 is null, you can also add StrCmp({Table.Field2}, "") = 0 for checking blank strings.
3. Now copy Object Line3 and Field3 and place on top of Line2 and Field2 respectively so that their positions match. Let these newly copied objects be Line3_1 and Field3_1 respectively.
4. Line3_1 and Field3_1 should be suppressed if Field2 contains a non null value. So for both of them Click Suppress checkbox and add the following in the format formula editor not isnull({Table.Field2})
5. If Line3_1 and Field3_1 are visible = Field2 is null\empty -> Line3 and Field3 should be suppressed or the output would be like:
Line1 Field1
Line3 Field3
Line3 Field3
So to remove the duplicate:
For both of them Click Suppress checkbox and add the following in the format formula editor isnull({Table.Field2})
Hope this helps. I will see if I can attach a sample report based on xtreme here. -
Plot empty point in line chart with previous non empty value
Hello,
I have a problem plotting series data in an SSRS line chart. For the empty points, I don't want to use the average or zero options provided by Report Builder; I want to fill the empty point with the last non-empty value. I tried the expression =Previous(Field!Value) with no luck. Does anyone have a good idea?
P.S. I do not want to use the query to fill the nulls with the previous non-null values, purely from a performance point of view. In the end, the chart should have lines like a square wave with different heights; if I use the average for empty points, it shows a sloped wave line, which does not reflect the real production.
Thanks
Richard
Hi Richard,
In Reporting Services, if the chart type is a linear chart type (bar, column, scatter, line, area, range), null values are shown on the chart as empty spaces or gaps between data points in a series. By default, empty points are calculated by taking the average
of the previous and next data points that are not null.
If we want to use previous value to replace the empty value, please refer to the following steps:
Right-click the field which displayed in Y axis (Height) to open the Series Properties.
In the Value field to modify the expression to look like this:
=iif(isnothing(Sum(Fields!Height.Value)),previous(sum(Fields!Height.Value)),sum(Fields!Height.Value))
The following screenshot is for your reference:
If there are any other questions, please feel free to ask.
Thanks,
Katherine Xiong
TechNet Community Support -
DataSet/DataGrid/Null values
Strange thing - when I bind a dataset to a datagrid and the
dataset column has null values, I cannot edit it. If I try to edit
the value, it just returns to the previous value when I press the
Enter key.
If all the rows start out with a non-null value, I can edit
them without any problem. Any thoughts?
TIA
Hello
While that is true for a unique index on columns where all values are null, it is not the case where one of the values is not null:
SQL> CREATE TABLE dt_test_nulls (id number, col1 varchar2(1))
2 /
Table created.
SQL> CREATE UNIQUE INDEX dt_test_nulls_i1 on dt_test_nulls(id)
2 /
Index created.
SQL> insert into dt_test_nulls values(null,'Y')
2 /
1 row created.
SQL> insert into dt_test_nulls values(null,'N')
2 /
1 row created.
SQL> create unique index dt_test_nulls_i2 on dt_test_nulls(id,col1)
2 /
Index created.
SQL> insert into dt_test_nulls values(null,'N')
2 /
insert into dt_test_nulls values(null,'N')
ERROR at line 1:
ORA-00001: unique constraint (BULK1.DT_TEST_NULLS_I2) violated
SQL> insert into dt_test_nulls values(null,null)
2 /
1 row created.
SQL> insert into dt_test_nulls values(null,null)
2 /
1 row created.
I just thought it was worth pointing out.
HTH
David
Message was edited by:
david_tyler -
Got multiple values for non null local custom field
Hi,
I get the following error message while saving a MPP from Project Professional to MS Project Server:
Got multiple values for non null local custom field.
I checked the MPP and found that there are fields with same alias as Enterprise field names. However, these fields are at a Task level, whereas the Enterprise fields are at a Project Level.
I would like to know why this is happening and the resolution for this issue. I don't want to delete the local fields.
Any help in this regard will be appreciated.
Then try to find any inconsistencies in the project plans with the issues, such as required values not entered. Also try saving the plan in XML format and saving it back as an .mpp file to see if that helps (be aware that any formatting will be lost).
Hope this helps,
Guillaume Rouyre, MBA, MVP, P-Seller | -
Index (or not) for excluding NULL values in a query
Hello,
I have a table that can become very large. The table has a VARCHAR2 column (let's call it TEXT) that can contain NULL values. I want to process only the records that have a value (NOT NULL). Also, the table is continuously extended with newly inserted records, and the inserts should suffer as little performance loss as possible.
My question: should I use an index on the column and if so, what kind of index?
I have done a little test with a function based index (inspired by this Tom Kyte article: http://tkyte.blogspot.com/2006/01/something-about-nothing.html):
create index text_isnull_idx on my_table(text,0);
I notice that if I use the clause WHERE TEXT IS NULL, the index is used. But if I use a clause WHERE TEXT IS NOT NULL (which is the clause I want to use), a full table scan is performed. Is this bad? Can I somehow improve the speed of this selection?
Thanks in advance,
Frans
I built a test case with a very simple table with 2 columns, and it shows that a FTS is better than index access even when the above ratio is <= 0.01 (1%):
DROP TABLE T1;
CREATE TABLE T1 (
C1 VARCHAR2(100)
,C2 NUMBER
);
INSERT INTO T1 (SELECT TO_CHAR(OBJECT_ID), ROWNUM FROM USER_OBJECTS);
BEGIN
FOR I IN 1..100 LOOP
INSERT INTO T1 (SELECT NULL, ROWNUM FROM USER_OBJECTS);
END LOOP;
END;
CREATE INDEX T1_IDX ON T1(C1);
ANALYZE TABLE T1 COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS
SET AUTOTRACE TRACEONLY
SELECT
C1, C2
FROM T1 WHERE C1 IS NOT NULL;
3864 rows selected.
real: 1344
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=3864 Bytes=30912)
1 0 TABLE ACCESS (FULL) OF 'T1' (Cost=59 Card=3864 Bytes=30912)
Statistics
0 recursive calls
0 db block gets
2527 consistent gets
3864 rows processed
BUT
SELECT
--+ FIRST_ROWS
C1, C2
FROM T1 WHERE C1 IS NOT NULL;
3864 rows selected.
real: 1296
Execution Plan
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=35 Card=3864 Bytes=30912)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=35 Card=3864 Bytes=30912)
2 1 INDEX (FULL SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=11 Card=3864)
Statistics
0 recursive calls
0 db block gets
5052 consistent gets
3864 rows processed
and just for comparison:
SELECT * FROM T1 WHERE C1 IS NULL;
386501 rows selected.
real: 117878
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=386501 Bytes=3092008)
1 0 TABLE ACCESS (FULL) OF 'T1' (Cost=59 Card=386501 Bytes=3092008)
Statistics
0 recursive calls
0 db block gets
193850 consistent gets
386501 rows processed
Hence you have to benchmark your queries with and without index[es] -
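One more option, on engines that support partial indexes (e.g. PostgreSQL, SQLite): index only the non-null rows, so the index stays small while still being usable for the IS NOT NULL query. A sketch in SQLite from Python (the table name is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (id INTEGER, txt TEXT)")
# The partial index contains entries only for rows where txt has a value
con.execute("CREATE INDEX big_txt_nonnull ON big(txt) WHERE txt IS NOT NULL")
con.executemany("INSERT INTO big VALUES (?, ?)",
                [(1, "a"), (2, None), (3, None), (4, "b")])

rows = con.execute(
    "SELECT id, txt FROM big WHERE txt IS NOT NULL ORDER BY id").fetchall()
print(rows)  # [(1, 'a'), (4, 'b')]
```

Whether the optimizer actually prefers the index over a full scan still depends on the fraction of non-null rows, as the benchmark above shows.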
How to validate if a column have NULL value, dont show a row with MDX
Hello,
I have this situation: an MDX result returns rows with NULL values in some columns. I tried NON EMPTY and NONEMPTY, but the result is the same. What I want to do is discard a row if a column has a NULL value, but I don't know how to implement it. Could somebody help me, please?
Thanks a lot.
Sukey Nakasima
Hello,
I found the answer in this link https://social.technet.microsoft.com/Forums/sqlserver/en-US/f9c02ce3-96b2-4cd6-921f-3679eb22d790/dont-want-to-cross-join-with-null-values-in-mdx?forum=sqlanalysisservices
Thanks a lot.
Sukey Nakasima