Rp_provide_from_last returns incorrect result
Hi
When issuing rp_provide_from_last for IT2001, we get an incorrect result.
rp-provide-from-last p2001 space '19000101' '99991231'
This macro does not return the latest record. Instead it returns the record with the highest subtype #. (It actually returns the last record shown in a SE16N listing of PA2001).
Has anyone seen this problem?
We are on SAP 4.7, SP 85.
Best regards
Kirsten
Please try this.
Usage:
Only in PNP database reports under GET PERNR, because the personnel number for which data is read comes from field PERNR-PERNR, and the field PNP-SW-AUTH-SKIPPED-RECORD is used.
(RP_READ_ALL_TIME_ITY beg end)
DATA: BEGDA LIKE P2001-BEGDA, ENDDA LIKE P2001-ENDDA.
INFOTYPES: 0000, 0001, 0002, ...
2001 MODE N, 2002 MODE N, ...
GET PERNR.
BEGDA = '19900101'. ENDDA = '19900131'.
RP_READ_ALL_TIME_ITY BEGDA ENDDA.
IF PNP-SW-AUTH-SKIPPED-RECORD NE '0'.
WRITE: / 'Authorization for time data missing'.
WRITE: / 'for personnel number', PERNR-PERNR. REJECT.
ENDIF.
Remarks
This RMAC module can be used when, for example, the time infotypes were originally defined in MODE N. This was done because the time data (from LOW-DATE to HIGH-DATE) might not all have fitted into the buffer. Now, however, they are read with shorter intervals (for example, in RPCALCx0 with payroll periods).
-Due to the large amount of data in HR, the infotypes 2000–2999 should not be read when GET PERNR occurs. Therefore, these infotypes are declared with the enhancement MODE N.
-As a result, the infotype tables under GET PERNR are not filled. The time infotype tables are filled subsequently using the macro RP_READ_ALL_TIME_ITY, but only for the time interval specified by PN-BEGDA and PN-ENDDA.
http://help.sap.com/saphelp_45b/helpdata/en/60/d8bb88576311d189270000e8322f96/content.htm
Best Regards
Similar Messages
-
SDO_DISTANCE returning incorrect results
I have a query that uses sdo_distance to find N nearest neighbors (line strings) to a lat/lon (point). The problem is that sdo_distance is giving me geometries which it says are 0 units (in this case meters) from that lat/lon when in fact they are much farther away.
here is an example:
SELECT
vw.reach_geom,
round(SDO_GEOM.SDO_DISTANCE(vw.REACH_GEOM, sdo_geometry(2001, 8307,
sdo_point_type(-88.23579545454545, 44.87982954545455, NULL),
NULL, NULL), 0.00005, 'unit=M'),4) Dist_In_Meters
FROM all_geom_vw
ORDER BY Dist_In_Meters;
I get two geometries. The first is incorrect:
GEOM
SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(-86.687042, 45.836369, -86.687042, 45.836369, -86.689133, 45.836567))
Dist_In_Meters
0
The second one is the correct one:
GEOM
SDO_GEOMETRY(2002, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(-88.380611, 44.996681, -88.380718, 44.995281, -88.380917, 44.993782,.....several other ordinates)
Dist_In_Meters
15.3559
You can tell that the second one is much closer to the point, since the lat/lon coords are pretty much the same.
Any ideas what's happening here?
thanks,
John
The first two points of the bad result geometry are the same point: "-86.687042, 45.836369, -86.687042, 45.836369". I'm pretty sure this will hose many spatial operations and may not throw an exception. Edit the geometry or do a "migrate to current" and it should work.
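The repeated first vertex is easy to check for programmatically. A minimal Python sketch that scans a flat SDO_ORDINATE_ARRAY-style list for consecutive duplicate points (`duplicate_vertices` is an illustrative helper, not a Spatial API):

```python
def duplicate_vertices(ordinates):
    """Return indices of consecutive duplicate (x, y) vertices in a
    flat ordinate list laid out as [x1, y1, x2, y2, ...]."""
    points = list(zip(ordinates[0::2], ordinates[1::2]))
    return [i for i in range(len(points) - 1) if points[i] == points[i + 1]]

# The "bad" geometry from the post: the first two vertices coincide.
bad = [-86.687042, 45.836369, -86.687042, 45.836369, -86.689133, 45.836567]
ok  = [-88.380611, 44.996681, -88.380718, 44.995281, -88.380917, 44.993782]

assert duplicate_vertices(bad) == [0]   # degenerate zero-length first segment
assert duplicate_vertices(ok) == []
```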
-Ted -
MDX used in Universe for Prompt returns Incorrect Result
Hi All,
I have created an optional predefined filter in the universe like so... (and for many other characteristics too...)
<OPTIONAL>
<FILTER KEY="[0COMP_CODE].[LEVEL01]">
<CONDITION OPERATORCONDITION="InList">
<CONSTANT CAPTION="@Prompt('Select Company Code(s)','A','Company code\Company code',multi,constrained)">
</CONSTANT>
</CONDITION>
</FILTER>
</OPTIONAL>
The code works i.e The prompt comes up, the text is correct, the list of values are correct. BUT....
This is a multi-value prompt, i.e. In List... So when I select only one company code, CODE-1, and run the Webi report, the result is as expected: the report displays all the data for company code CODE-1.
When I refresh the report and add CODE-2, so the list now has CODE-1 and CODE-2 in the prompt, only the data for CODE-1 is displayed in the report; CODE-2's data is not returned.
I also find that when I select CODE-2 first, then CODE-1, only CODE-2's data is returned, so I can safely say that only the first selected company code is returned in the Webi report, irrespective of how many company codes were selected in the list.
This behaviour also affects any other characteristic on which filter is created in this manner.
Has anyone experienced this behaviour? Is there something on the BW side that needs to be checked? or is this a bug?
Thanks
J
Hi Uwe,
Yes... the issue is that we need to use short XML tags. The issue is also documented in one of the release notes.
In the example below, you can see the </CONSTANT> tag is used. This is what is causing the issue.
<OPTIONAL>
<FILTER KEY="[0COMP_CODE].[LEVEL01]">
<CONDITION OPERATORCONDITION="InList">
<CONSTANT CAPTION="@Prompt('Select Company Code(s)','A','Company code\Company code',multi,constrained)">
</CONSTANT>
</CONDITION>
</FILTER>
</OPTIONAL>
So... to get the results correctly, we need to remove the </CONSTANT> tag and close it at the end of the prompt line with a /. The same code now looks like this...
<OPTIONAL>
<FILTER KEY="[0COMP_CODE].[LEVEL01]">
<CONDITION OPERATORCONDITION="InList">
<CONSTANT CAPTION="@Prompt('Select Company Code(s)','A','Company code\Company code',multi,constrained)"/>
</CONDITION>
</FILTER>
</OPTIONAL>
I have tested this using other operators and it works fine.
Jacques -
Like in stored procedure returns incorrect result
Hello all,
I have a stored procedure like below
Alter PROCEDURE ContactsListBySearch
@AuthorID int,
@currentPage INT,
@pageSize INT,
@searchStr nvarchar
AS
BEGIN
set nocount on;
WITH tempLog AS (
SELECT distinct ROW_NUMBER()OVER (ORDER BY email DESC) AS Row,
email,username from AddContact where userid = @AuthorID and email like '%'+@searchStr+'%' and username like '%'+@searchStr+'%' )
SELECT email,username
FROM tempLog
WHERE Row between ((@currentPage - 1) * @pageSize + 1) and (@currentPage*@pageSize)
END
For the search string 'david' it gives me unrelated rows.
But for the same search string 'david', if I run the query
select email,username from addcontact where userid=2 and email like '%david%' and username like '%david%'
it gives me the exact result.
How do I pass a parameter as a string to the stored procedure? Please help me.
regards,
Guru
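One likely cause, though the thread never confirms it: in T-SQL, a procedure parameter declared as nvarchar with no length defaults to nvarchar(1), so @searchStr arrives as just 'd', and '%d%' matches many unrelated rows; declaring a length such as nvarchar(100) should fix it. A small Python sketch of that truncation effect (sample rows are made up):

```python
# Simulate LIKE '%s%' matching with the full search string versus the
# single character an unsized nvarchar parameter would retain.
rows = [("david@x.com", "david"), ("dana@x.com", "dana"), ("bob@x.com", "bob")]

def like_search(pattern):
    # email LIKE '%p%' AND username LIKE '%p%'
    return [r for r in rows if pattern in r[0] and pattern in r[1]]

exact = like_search("david")           # the ad-hoc query: one related row
truncated = like_search("david"[:1])   # nvarchar(1) behaviour: pattern is 'd'

assert exact == [("david@x.com", "david")]
assert len(truncated) == 2             # 'dana' now matches too
```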
Edited by: user4554966 on Jan 18, 2010 6:45 PM
Did not get any help from them ;)
Regards,
Guru -
Very strange bug with compareTo: returning incorrect results
Hello everyone! I have used the method compareTo many times to maintain my database project's entries. However recently I have discovered a bug, in which:
a and b both being Storage type objects:
private static class Storage {
    Object data;           // the stored payload
    int nextData;          // an array stores the Storage objects
    int previousData;      // an integer locating the previous entry in the array
}
Problem:
((Comparable)(a.data)).compareTo((Comparable)(b.data)) returns a 3, when a.data is clearly 5, and b.data is clearly 20.
This is very strange, as compareTo should return a -1 instead of a positive number. Is this a known bug with the compareTo method? I have been using it reliably in many programs, but this is the first time it has happened to me.
Not really... comparing the Strings "5" and "20" is really the same as comparing "5" to "2" (the first character of each string), the difference being, surprise, surprise, 3.
If you want the Strings to be comparable that way you should left-fill them with spaces or zeros.
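That behaviour can be sketched directly; Python compares strings the same lexicographic way Java's String.compareTo does:

```python
# Lexicographic string comparison is decided by the first differing
# characters, '5' vs '2', not by numeric value.
assert "5" > "20"
assert ord("5") - ord("2") == 3   # the "surprise" difference of 3

# Left-filling with zeros makes string order agree with numeric order,
assert "05" < "20"
# or simply compare numerically instead of as strings.
assert int("5") < int("20")
```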
Or, better still, override compareTo and make your Storage implement Comparable instead of all that casting of the data objects. -
Calling WSAGetLastError() from an IOCP thread return incorrect result
I have called WSARecv(), which returned WSA_IO_PENDING. I have then sent an RST packet from the other end. The GetQueuedCompletionStatus() function, which runs in another thread, returned FALSE as expected, but when I called WSAGetLastError() I got 64 instead of WSAECONNRESET.
So why did WSAGetLastError() not return WSAECONNRESET?
Hi Tom_912,
Thanks for posting in MSDN forum.
Where do you call WSAGetLastError and get 64? From your description, it seems that you call WSAGetLastError after the GetQueuedCompletionStatus() function.
Read the document:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa364986(v=vs.85).aspx
We should call GetLastError to get extended error information instead of the WSAGetLastError function. The error code 64 is ERROR_NETNAME_DELETED, which means that the specified network name is no longer available.
https://msdn.microsoft.com/en-us/library/windows/desktop/ms681382(v=vs.85).aspx
It's likely that your connection has gone away, since GetQueuedCompletionStatus returned false. Is that what you expect? If not, please provide more details about what you do, and it would be better if you could share some code with us.
Best regards,
Shu Hu
-
SQL Server 2014 - Columnstore Incorrect Results
Hello,
we are running into a problem with SQL Server 2014 and the columnstore index. We have a partitioned table with about 300 Million records in it. With SQL Server 2012 this has been in use without problems.
Since we upgraded to SQL Server 2014, the exact same queries on exactly the same data return incorrect results. We can only bypass the problem by either dropping the CS Index or adding a maxdop = 1 query hint.
I thought this was an old bug in SQL Server 2012? We have not installed CU Pack 4 for SQL Server 2014 yet, but will it solve the problem (assuming others have faced the same problem)?
We are running: Microsoft SQL Server 2014 Enterprise Edition - 12.0.2000.8 (X64) on a 2x6Core Machine
Thanks in advance!
SQL Server 2012 only featured non-clustered columnstore indexes, which were separate structures. Have you changed to a clustered columnstore (CS) index in SQL Server 2014? (i.e. dropped your non-clustered CS, created a CS)
There are a number of fixes that reference columnstore indexes in the current CUs (CU1, CU2, CU3, CU4), but none which sound exactly like your problem.
This sounds similar and is fixed in CU1. You should review the CU documents yourself to see if any of them mention a similar problem, and then consider applying the CU. You might also try applying them to a test environment, or a temporary Azure VM for example, to see if one of them solves your problem.
If you can create a reliable "repro" of the problem, consider raising a Connect item, which is a Microsoft bug report. -
LessFilter and ReflectionExtractor API giving incorrect results
I am using Oracle Coherence version 3.7. We are storing DTO objects in cache having "modificationTime" property/instance variable of "java.util.date" type. In order to fetch data from cache passing "java.util.date" variable as input for comparison, LessFilter and ReflectionExtractor api's are used. Cache.entryset(filter) returns incorrect results.
Note: we are using "com.tangosol.io.pof.PofWriter.writeDateTime(int arg0, Date arg1) " api to store data in cache and "com.tangosol.io.pof.PofReader.readDate(int arg0)" to read data from cache. There is no readDateTime api available ?
We tested same scenario updating DTO class. Now it has another property in DTO of long(to store milliseconds). Now long is passed as input for comparison to LessFilter and ReflectionExtractor api's and correct results are retrieved.
Ideally, java.util.Date or corresponding milliseconds passed as input should filter and return same and logically correct results.
Code:
1) Test by Date: returns incorrect results
public void testbyDate(final Date startDate) throws IOException {
final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
final Filter lessFilter = new LessFilter(extractor, startDate);
final Set results = CACHE.entrySet(lessFilter);
LOGGER.debug("Fetched Records:" + results.size());
assert results.isEmpty();
}
2) Test by milliseconds: returns correct results
public void testbyTime(final Long time) throws IOException {
final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
final Filter lessFilter = new LessFilter(extractor, time);
final Set results = CACHE.entrySet(lessFilter);
LOGGER.debug("Fetched Records:" + results.size());
assert results.isEmpty();
}
Hi Harvy,
Thanks for your reply. You validated it against a single object in cache using ExternalizableHelper.toBinary/ExternalizableHelper.fromBinary. But we are querying against a collection of objects in cache.
Please have a look at below code.
*1)* We are using TestDTO.java extending AbstractCacheDTO.java as value object for our cache.
import java.io.IOException;
import java.util.Date;
import com.tangosol.io.AbstractEvolvable;
import com.tangosol.io.pof.EvolvablePortableObject;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
* The Class AbstractCacheDTO.
* @param <E>
* the element type
* @author apanwa
public abstract class AbstractCacheDTO<E> extends AbstractEvolvable implements EvolvablePortableObject {
/** The Constant IDENTIFIER. */
private static final int IDENTIFIER = 0;
/** The Constant CREATION_TIME. */
private static final int CREATION_TIME = 1;
/** The Constant MODIFICATION_TIME. */
private static final int MODIFICATION_TIME = 2;
/** The version number of cache DTO implementation **/
private static final int VERSION = 11662;
/** The id. */
private E id;
/** The creation time. */
private Date creationTime = new Date();
/** The modification time. */
private Date modificationTime;
* Gets the id.
* @return the id
public E getId() {
return id;
* Sets the id.
* @param id
* the new id
public void setId(final E id) {
this.id = id;
* Gets the creation time.
* @return the creation time
public Date getCreationTime() {
return creationTime;
* Gets the modification time.
* @return the modification time
public Date getModificationTime() {
return modificationTime;
* Sets the modification time.
* @param modificationTime
* the new modification time
public void setModificationTime(final Date modificationTime) {
this.modificationTime = modificationTime;
* Read external.
* @param reader
* the reader
* @throws IOException
* Signals that an I/O exception has occurred.
* @see com.tangosol.io.pof.PortableObject#readExternal(com.tangosol.io.pof.PofReader)
@Override
public void readExternal(final PofReader reader) throws IOException {
id = (E) reader.readObject(IDENTIFIER);
creationTime = reader.readDate(CREATION_TIME);
modificationTime = reader.readDate(MODIFICATION_TIME);
* Write external.
* @param writer
* the writer
* @throws IOException
* Signals that an I/O exception has occurred.
* @see com.tangosol.io.pof.PortableObject#writeExternal(com.tangosol.io.pof.PofWriter)
@Override
public void writeExternal(final PofWriter writer) throws IOException {
writer.writeObject(IDENTIFIER, id);
writer.writeDateTime(CREATION_TIME, creationTime);
writer.writeDateTime(MODIFICATION_TIME, modificationTime);
@Override
public int getImplVersion() {
return VERSION;
import java.io.IOException;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
* @author nkhatw
public class TestDTO extends AbstractCacheDTO<TestIdentifier> {
private Long timeinMillis;
private static final int TIME_MILLIS_ID = 3;
@Override
public void readExternal(final PofReader reader) throws IOException {
super.readExternal(reader);
timeinMillis = Long.valueOf(reader.readLong(TIME_MILLIS_ID));
@Override
public void writeExternal(final PofWriter writer) throws IOException {
super.writeExternal(writer);
writer.writeLong(TIME_MILLIS_ID, timeinMillis.longValue());
* @return the timeinMillis
public Long getTimeinMillis() {
return timeinMillis;
* @param timeinMillis
* the timeinMillis to set
public void setTimeinMillis(final Long timeinMillis) {
this.timeinMillis = timeinMillis;
}*2)* TestIdentifier.java as key in cache for storing TestDTO objects.
import java.io.IOException;
import org.apache.commons.lang.StringUtils;
import com.tangosol.io.AbstractEvolvable;
import com.tangosol.io.pof.EvolvablePortableObject;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
* @author nkhatw
public class TestIdentifier extends AbstractEvolvable implements EvolvablePortableObject {
private String recordId;
/** The Constant recordId. */
private static final int RECORD_ID = 0;
/** The version number of cache DTO implementation *. */
private static final int VERSION = 11660;
@Override
public void readExternal(final PofReader pofreader) throws IOException {
recordId = pofreader.readString(RECORD_ID);
@Override
public void writeExternal(final PofWriter pofwriter) throws IOException {
pofwriter.writeString(RECORD_ID, recordId);
@Override
public int getImplVersion() {
return VERSION;
@Override
public boolean equals(final Object object) {
if (object instanceof TestIdentifier) {
final TestIdentifier id = (TestIdentifier) object;
return StringUtils.equals(recordId, id.getRecordId());
} else {
return false;
* @see java.lang.Object#hashCode()
@Override
public int hashCode() {
return recordId.hashCode();
* @return the recordId
public String getRecordId() {
return recordId;
* @param recordId
* the recordId to set
public void setRecordId(final String recordId) {
this.recordId = recordId;
}*3) Use Case*
We are fetching TestDTO records from cache based on LessFilter. However, results returned from cache differs if query is made over property "getModificationTime" of type java.util.Date or over property "getTimeinMillis" of type Long(milliseconds corresponding to date). TestService.java is used for the same.
import java.io.IOException;
import java.util.Collection;
import java.util.Date;
import java.util.Map;
import java.util.Set;
import org.apache.log4j.Logger;
import com.ladbrokes.dtos.cache.TestDTO;
import com.ladbrokes.dtos.cache.TestIdentifier;
import com.cache.services.CacheService;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.ValueExtractor;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.LessFilter;
* @author nkhatw
public class TestService implements CacheService<TestIdentifier, TestDTO, Object> {
private static final String TEST_CACHE = "testcache";
private static final NamedCache CACHE = CacheFactory.getCache(TEST_CACHE);
private static final Logger LOGGER = Logger.getLogger(TestService.class);
* Push DTO objects with a) modTime of java.util.Date type b) timeInMillis of Long type
* @throws IOException
public void init() throws IOException {
for (int i = 0; i < 30; i++) {
final TestDTO dto = new TestDTO();
final Date modTime = new Date();
dto.setModificationTime(modTime);
final Long timeInMillis = Long.valueOf(System.currentTimeMillis());
dto.setTimeinMillis(timeInMillis);
final TestIdentifier testId = new TestIdentifier();
testId.setRecordId(String.valueOf(i));
dto.setId(testId);
final CacheService testService = new TestService();
testService.createOrUpdate(dto, null);
LOGGER.debug("Pushed record in cache with key: " + i + " modTime: " + modTime + " Time in millis: "
+ timeInMillis);
* 1) Fetch Data from cache based on LessFilter with args:
* a) ValueExtractor: extracting time property
* b) java.util.Date value to be compared with
* 2) Verify extracted entryset
* @throws IOException
public void testbyDate(final Date startDate) throws IOException {
final ValueExtractor extractor = new ReflectionExtractor("getModificationTime");
LOGGER.debug("Fetching records from cache with modTime less than: " + startDate);
final Filter lessFilter = new LessFilter(extractor, startDate);
final Set results = CACHE.entrySet(lessFilter);
LOGGER.debug("Fetched Records:" + results.size());
assert results.isEmpty();
* 1) Fetch Data from cache based on LessFilter with args:
* a) ValueExtractor: extracting "time in millis property"
* b) java.Long value to be compared with
* 2) Verify extracted entryset
public void testbyTime(final Long time) throws IOException {
final ValueExtractor extractor = new ReflectionExtractor("getTimeinMillis");
LOGGER.debug("Fetching records from cache with timeinMillis less than: " + time);
final Filter lessFilter = new LessFilter(extractor, time);
final Set results = CACHE.entrySet(lessFilter);
LOGGER.debug("Fetched Records:" + results.size());
assert results.isEmpty();
@Override
public void createOrUpdate(final TestDTO testDTO, final Object arg1) throws IOException {
CACHE.put(testDTO.getId(), testDTO);
@Override
public void createOrUpdate(final Collection<TestDTO> arg0, final Object arg1) throws IOException {
// YTODO Auto-generated method stub
@Override
public <G>G read(final TestIdentifier arg0) throws IOException {
// YTODO Auto-generated method stub
return null;
@Override
public Collection<?> read(final Map<TestIdentifier, Object> arg0) throws IOException {
// YTODO Auto-generated method stub
return null;
@Override
public void remove(final TestDTO arg0) throws IOException {
// YTODO Auto-generated method stub
Use Case execution Results:
"testbyTime" method returns correct results.
However, "testbyDate" method gives random and incorrect results. -
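A plausible mechanism for the testbyDate behaviour above (assumed, not confirmed in the thread): if the POF layer writes the full timestamp but the reader recovers only the date portion, modification times from the same day compare equal, so a strict LessFilter matches nothing, while the raw milliseconds still compare correctly. A Python sketch of that truncation effect:

```python
from datetime import datetime

def read_date(dt):
    # Simulates a reader that keeps only the date portion of a stored
    # timestamp (assumed mechanism, names are illustrative).
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

stored = datetime(2013, 5, 1, 10, 30)  # record's modificationTime
query  = datetime(2013, 5, 1, 12, 0)   # filter value, same day

# Millisecond-level comparison: strictly less, the record matches.
assert stored < query
# Date-only comparison: the values are equal, so strict less-than fails.
assert not (read_date(stored) < read_date(query))
```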
Incorrect result being returned for a formula
I'm getting an incorrect result for a simple formula. Changing the value of Rs does not affect the output correctly. I am new to LabVIEW and could use some help. VI attached.
Attachments:
testformula.vi 21 KB
Input values used:
Rp2 = -131.763
Rs2 = 0.321
Isc2= 8.21
Vmp= 26.3
Imp= 7.61
Io2= 9.735E-8
exp2= 8.404E+8
Output being shown as:
Pmax3 = 200.143
It should be Pmax3 = 192.688
Attachments:
testformula.vi 22 KB -
"select count(*)" and "select single *" returns different result
Good day!
product version SAP ECC 6.0
oracle10
data transfers from external oracle db into customer tables using direct oracle db link
sometimes I get case with different results from 2 statements
*mytable has 10 rows
*1st statement
data: cnt type I value 0.
select count( * ) into cnt from mytable WHERE myfield_0 = 123 and myfield_1 = '123'.
*cnt returns 10 - correct
*2nd statement
select single * from mytable WHERE myfield_0 = 123 and myfield_1 = '123'.
*sy-dbcnt returns 0
*sy-subrc returns 4 - incorrect, 10 rows are "invisible"
but
1. se16 shows correct row number
2. I update just one row from "invisible" rows using se16 and 2nd statement returns correct result after that
can not understand why
thank you in advance.
Thank you, Vishal
but,
general problem is that
1. both statements have the same WHERE conditions
2. the 1st returns a result set with data (sy-dbcnt = 10), the 2nd returns an empty set, but it should return 1 in sy-dbcnt
Yes, different meaning, you are right, but the 2nd must return 1 because of the "select single *" construction, not 0.
The dataset to process is the same, the WHERE conditions are equal...
I think the problem is in how ABAP interprets "select count(*)" and "select single *".
Maybe "select count(*)" scans only the PK from index page(s), and "select single *" scans data pages, and something is wrong there?
I'm new in SAP and didn't find any SAP tool to trace a dump of data and index pages with Native SQL.
se16 shows all records.
And why, after a simple manual update of just one record using se16, does "select single *" return 1?
I've just marked one row to update, didn't change any data, then pressed "save". -
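One possible explanation, assumed rather than confirmed: data loaded over a direct database link bypasses the ABAP table buffer, so a buffered table's "select single" is served from a stale buffer, while aggregate queries like "select count(*)" always go to the database; an SE16 update goes through the application server and refreshes the buffer. A Python sketch of that stale-buffer effect (all names are illustrative):

```python
# 10 rows really exist in the database; the buffer predates the load.
database = {("123", "123"): ["row"] * 10}
buffer_cache = {}

def select_count(key):
    return len(database.get(key, []))       # aggregate: bypasses the buffer

def select_single(key):
    return len(buffer_cache.get(key, []))   # served from the stale buffer

key = ("123", "123")
assert select_count(key) == 10   # COUNT(*) sees the rows
assert select_single(key) == 0   # SELECT SINGLE sees none

# An update through the application layer synchronizes the buffer:
buffer_cache[key] = database[key]
assert select_single(key) == 10
```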
Power View returns incorrect totals on SSAS MD
I have a very simple Power View report on top of SSAS MD.
I have two measures - both not calculated: an average over time and a last-non-empty.
When I slice the data, the totals are incorrect. For example, if I look at two months - June and July - the average returns a result as if the sum was divided by 1.97 instead of 2, and the last-non-empty returns the amount of December instead of July. When I reconstruct the same report in SSRS, the totals are correct.
This looks a lot like this bug:
http://support.microsoft.com/kb/2880094/en-us
The solution in the KB article is to install SQL Server 2012 SP1 CU6, however I already have CU9 installed.
Maybe the patch wasn't successful? Is there any place I can check? When I run the set-up again I get the notification that there is nothing to update.
Any ideas?
MCSE SQL Server 2012 - Please mark posts as answered where appropriate.
Maybe the patch wasn't successful? Is there any place I can check?
Hello,
You can check the version of SSAS in the Windows Registry. Or you can connect to the server by using Object Explorer in SQL Server Management Studio. After Object Explorer is connected, it will show the version information in parentheses.
Reference:
Find the SQL Server Analysis Services and Reporting Services Version, Edition and Service Pack
Microsoft SQL Server 2012 Builds
Regards,
Fanny Liu
TechNet Community Support -
Simple query returns wrong results in Sql 2012
On my Windows 8 box running Sql 2012 11.0.3128, this query returns an IncludeCount of 0 and an ExcludeCount of 1.
On my Windows 7 box running Sql 2008 10.50.2550 this query returns an IncludeCount of 3 and an ExcludeCount of 1, which is correct.
In short, it runs properly on these versions of OS and Sql's:
Windows 2008 R2 + Sql 10.50.2550
Windows 2008 R2 + Sql 10.50.4000
Windows 2012 SP1 + Sql 11.0.3000
Windows 7 + Sql 11.0.2100
And gives incorrect results on these OS's and Sql's (so far, tested):
Windows 8 Enterprise + Sql 11.0.3128
Windows 2008 R2 + Sql 10.50.2550
I wondered if anyone else can reproduce this? I can't figure out the magic combination of OS and SQL version this breaks on.
In all scenarios, the resulting @filters table is populated correctly, and the [Include] column is properly set to a 1 or a 0, so why aren't the other variables being properly set?
If I change the [ID] column to NONCLUSTERED, it works fine, too. It doesn't matter if @filters is a TVP or a temp table or an actual table, same (incorrect) results in each case.
DECLARE @filters TABLE([ID] bigint PRIMARY KEY, [Include] bit)
DECLARE @excludecount int = 0
DECLARE @includecount int = 0
DECLARE @id bigint
INSERT INTO @filters ([ID])
VALUES (1), (3), (4), (-7)
UPDATE @filters SET
@id = [ID],
@includecount = @includecount + (CASE WHEN @id > 0 THEN 1 ELSE 0 END),
@excludecount = @excludecount + (CASE WHEN @id < 0 THEN 1 ELSE 0 END),
[Include] = CASE WHEN @id > 0 THEN 1 ELSE 0 END,
[ID] = ABS(@id)
SELECT @includecount as IncludeCount, @excludecount as ExcludeCount
SELECT * FROM @filters
What part is undocumented?
http://technet.microsoft.com/en-us/library/ms177523.aspx
The above link states I can update variables inside an UPDATE statement ...
But it does not say what the correct result of what you are trying to do would be. Variable assignment in UPDATE only makes sense if the UPDATE hits one row. If the UPDATE matches several rows, which value the variable is set to is not defined.
It gets even more complicated when you have the variable on both sides of the expression. But I'd say that the only value that makes sense as the final value of @includecount is 0. An UPDATE statement, like other DML statements in SQL, is logically defined as all-at-once. There are no intermediate results. Therefore the only possible value is the initial value of @includecount plus the value of the CASE expression, which should always return 0, since @id is NULL when the UPDATE statement starts to execute.
I'm afraid that what you have is nonsense from a language perspective, and the result is undefined. Whenever you get different results from a query depending on whether you have certain indexes in place, you know that the query is indeterministic. There are certainly parts of SQL that are indeterministic, for instance ORDER BY on non-unique columns. But in this particular case you have also wandered out into land that is undefined.
I'm not sure what you are trying to achieve, but I can only advice you to go back to the drawing board.
Erland Sommarskog, SQL Server MVP, [email protected] -
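Erland's "all-at-once" point about the @includecount/@excludecount counters can be sketched outside SQL. In this illustrative Python comparison (not T-SQL semantics verbatim), row-by-row evaluation accumulates the counters, but if every expression sees the variable's initial value, the CASE contributes nothing:

```python
rows = [1, 3, 4, -7]   # the [ID] values inserted into @filters

# Row-by-row semantics: @id is assigned before the counters use it.
inc = exc = 0
for r in rows:
    _id = r
    inc += 1 if _id > 0 else 0
    exc += 1 if _id < 0 else 0
assert (inc, exc) == (3, 1)   # what one server version happened to produce

# All-at-once semantics: every expression sees @id's initial value (NULL),
# so the CASE never adds anything.
_id = None
inc = exc = 0
for r in rows:
    inc += 1 if (_id is not None and _id > 0) else 0
    exc += 1 if (_id is not None and _id < 0) else 0
assert (inc, exc) == (0, 0)   # equally "valid" under undefined behaviour
```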
Does compare aggregates mode produce incorrect results?
Has anyone encountered a problem with using compare aggregates mode with arrays.
For example, if compare aggregates mode is selected when using the "In Range and Coerce.vi" with the inputs (upperLimit, lowerLimit, and x value) being arrays of integers, then compare aggregates does not return correct results. I've also noticed this with the greater-than and less-than comparison VIs.
I've attached a sample which further illustrates the incorrect results.
Attachments:
compareAggregatesTest.vi 9 KB
compareAggregatesTest1.vi 9 KB
I talked to some people at NI and here's how I understand it:
Compare aggregates simply does not do what we think it does. It is NOT the same as comparing all elements and then ANDing the results. Instead, it compares the elements in the cluster in order. This is actually identical to ANDing the results when you do an equality comparison, but it's different if you do a less-than or greater-than comparison.
The LabVIEW help provides the example of a phone book, where "Smith, John" is greater than "Smith, Jane" and where "Smith, Jane" is also greater than "Doe, John" because Doe comes before Smith.
This helps to explain the results of my example:
In the first array element, the comparison fails because 10 is the first element in the cluster and it is less than 40.
In the second array element, 40 and 40 are equal, so the decision is moved to the next element (like having two "Smith"s), and since 40 is greater than 30, the comparison returns true.
So again, the order is important!
Try to take over the world!
Attachments:
Compare Aggregates.png 16 KB -
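The order-sensitive, element-by-element rule described for compare aggregates mode is exactly how tuples compare in, for example, Python, which makes it easy to see why it differs from element-wise AND:

```python
# Lexicographic (phone-book) comparison: decided by the first unequal element.
assert not ((10, 99) > (40, 0))   # 10 < 40, so the whole comparison is False
assert (40, 40) > (40, 30)        # tie on 40, decided by 40 > 30

# NOT the same as comparing all elements and ANDing the results:
elementwise = all(a > b for a, b in zip((40, 40), (40, 30)))
assert elementwise is False       # 40 > 40 fails element-wise
```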
Rownum giving Incorrect Result in 11gR2 but working ok in 10gR2
Hi All,
We have following query which is working fine in 10g but in 11g it is showing incorrect result.
select x.*,rownum from (select rat.rating_agency_id from bus_ca_cpty_rating rat,MST_CP_RATING mst
where rat.org_id=618
and
rat.rating_agency_id=mst.rating_agency_id
and
rat.rating_value=mst.rating_value
and
rat.heritage_system=mst.heritage_system
order by rat.rating_date,rat.rating_time)x
where rownum=1;
Result Without last Check <where rownum=1> in the query (in both 10g and 11g)
RATING_AGENCY_ID ROWNUM
3 1
1 2
Result of the query in 11gR2 (11.2.0.3)
RATING_AGENCY_ID ROWNUM
1 1
Result of the query in 10gR2 (10.2.0.3)
RATING_AGENCY_ID ROWNUM
3 1
Request your help to resolve the issue(please tell me the bug name if it is a bug) and please let me know how it is processing the query in 11g.
Edited by: 906061 on Jun 19, 2012 2:22 AM
T.PD wrote:
906061 wrote:
Result Without last Check <where rownum=1> in the query (in both 10g and 11g)
RATING_AGENCY_ID ROWNUM
3 1
1 2
Result of the query in 11gR2 (11.2.0.3)
RATING_AGENCY_ID ROWNUM
1 1
Result of the query in 10gR2 (10.2.0.3)
RATING_AGENCY_ID ROWNUM
3 1
Your desired result depends on the wrong idea of implicit ordering of the results. There is no such thing!
The database does not sort returned rows in any way (unless you use ORDER BY in your query). The order of returned rows may be consistent over a long period, but if the table contents are reorganized or (as I assume) you import the data into another database, the order may change.
To make a long story short: you need another filter condition than <tt>rownum = 1</tt>.
bye
TPD
Look closely: it looks like a standard top-n query with the order by in the sub-query. -
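The distinction the two replies above circle around can be sketched with plain lists (values here are illustrative, not the poster's data): "order, then take row 1" is a deterministic top-n, while "take row 1, then order" depends on arrival order, which the database does not guarantee:

```python
rows = [("b", "2012-06-19"), ("a", "2012-01-01")]   # arrival order

# ORDER BY in the sub-query, rownum = 1 outside: sort first, then take one.
top_n = sorted(rows, key=lambda r: r[1])[0]
assert top_n[0] == "a"            # always the earliest date

# rownum = 1 with no effective ordering: whatever row arrives first.
first_arrival = rows[0]
assert first_arrival[0] == "b"    # an accident of arrival order
```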
Incorrect result between maintain master data and bex query, how can i fix?
Hi ALL,
I got some messages from the users that there is an incorrect result between SAP R/3 and a report on BW. I checked the monitor and saw there was a job for 0CUSTOMER_ATTRIBUTE that finished correctly, but the processing was only into the PSA. I immediately started a full update from the PSA into the data targets, and it finished correctly. Afterwards, when I check the content of 0CUSTOMER (right click, maintain master data), I get the correct attribute result that matches the data in SAP R/3; the problem is that when I execute a BEx query on this master data, it does not return the same attribute data.
Can SomeBody Help please
Bilal
Hi,
For any master data attributes loaded, you have to run an "Attribute Change Run" for that. Execute it for master data 0CUSTOMER.
The same is available in RSA1 -> Tools (top menu) -> Apply hierarchy/attribute change run.
hope it helps,
regards,
Parth.