Use of block sizes with APO pgm /SAPAPO/RDMCPPROCESS

I am using APO V5.1.
I am using standard pgm /SAPAPO/RDMCPPROCESS to send APO planning data back to the connected ECC system.
One of the parameters is block size.
I am using the default of 1000.
But what are the performance implications of using different block sizes? What is 'good practice' for using this block size parameter?
Thanks for any advice on this point...

Hi,
The F1 help itself gives good information on this topic.
The block size indicates how many change pointers are processed in one shot (one block).
So, if you increase the block size, more change pointers are processed in one shot, which means the memory requirement is higher. On the other hand, fewer blocks are needed to work through the same set of change pointers. So, if your system has sufficient capacity to process the number of change pointers you specify per block, increasing the block size increases the load on memory but speeds up processing, meaning your change pointer job finishes faster.
Vice versa is also true.
Your system performance (mostly linked to memory availability) could be negatively impacted if you increase the block size. If you see system performance degrade with the block size of 1000, it most likely means your memory resources are already exhausted by other processes, and you could reduce the block size to 500. This frees up memory and the system performs faster overall, though your change pointer job takes a little more time.
Start with the default block size. Unless you face an issue, there's no point in changing the default value.
Hope this explains.
PS: In our case, we mostly use 1000, but during some peak system load times, we use 500 as block size.
Thanks - Pawan

Similar Messages

  • Using large block sizes for index and table spaces

    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes? I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?

    user3390467 wrote:
    " You are not using large blocksizes for your index tablespaces. Oracle research proves that indexes will build flatter tree structures in larger blocksizes.
    Is this a generic statement that I can use for all tables or indexes?
    This is a generic statement used by some consultants. Unfortunately, it is riddled with exceptions and other considerations.
    One consultant in particular seems to have anecdotal evidence that using different block sizes for index (big) and data (small) can yield almost miraculous improvements. However, that can not be backed up due to NDA. Many of the rest of us can not duplicate the improvements, and indeed some find situations where that results in a degradation (esp with high insert/update rates from separated transactions).
    I also have batch and online activity. My primary target is batch and it should not impact online. Not sure if both have common tables.
    How to find the current block size used for tables and index? is there a v$parameter query?
    What is an optimal block size value for batch?
    How do I know when flatter tree str has been achieved using above changes? Is there a query to determine this?
    What about tables, what is the success criterion for tables. can we use the same flat tree str criterion? Is there a query for this?
    I'd strongly recommend that you
    1) stop using generic tools to analyze specific problems
    2) define your problem in detail (what are you really trying to accomplish - seems like performance tuning, but you never really state that)
    3) define the OS and DB version - in detail. Give rev levels and patch levels.
    If you are having a serious performance issue, I strongly recommend you look at some performance tuning specialists like "http://www.method-r.com/", "http://www.miracleas.dk/", "http://www.hotsos.com/", "http://www.pythian.com/", or even Oracle's Performance Tuning consultants. Definitely worth the price of admission.
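    For the "is there a v$parameter query?" part, the standard dictionary views do answer it. A minimal sketch (the SCOTT owner is just a placeholder for whatever schema you care about):

        -- default database block size
        SELECT value FROM v$parameter WHERE name = 'db_block_size';

        -- block size per tablespace (covers non-default blocksize tablespaces)
        SELECT tablespace_name, block_size FROM dba_tablespaces;

        -- index "flatness": BLEVEL is the branch depth above the leaf blocks
        SELECT index_name, blevel FROM dba_indexes WHERE owner = 'SCOTT';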

  • How to install 10g database on windows with db block size 16k

    Hi,
    Can someone help me install an Oracle 10g database on Windows XP with a db block size of 16k?
    I need this database because it is one of the recommendations for installing OWB (Oracle Warehouse Builder).
    Thanks,
    Philip.

    1) In the initialization parameter file, set DB_BLOCK_SIZE=8192 or 16384 - one way.
    2) The other way: once you install Oracle 10g, at the time you create the database with the Database Configuration Assistant you can modify the block size on the Initialization Parameters screen:
    - Sizing tab
    - Block Size: 8K/16K
    Remember that by default Oracle 10g uses an 8K block size.
    You use the options on the Sizing tab to configure the block size of your database
    and the number of processes that can connect to this database. The Block Size setting corresponds to the smallest unit of storage within the Oracle database. All storage of database objects (tables, indexes, and so on) is governed by the block size. The block size defaults to 8KB, but you can modify it. Once the database is created, you cannot modify this setting.
    The maximum and minimum size of an Oracle block depend on the operating system. Generally, 8KB is sufficient for most transaction-oriented applications, and larger block sizes such as 16KB and higher are used in data warehouse-type applications.
    The Processes setting specifies the maximum number of simultaneous operating system processes that can be connected to this Oracle database. You must include at least six processes for the Oracle background processes. You can increase this number on the Initialization Parameters screen.
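    To make the first option concrete, here is a minimal sketch of the relevant line in the initialization parameter file (values illustrative; DB_BLOCK_SIZE only takes effect when the database is created, not afterwards):

        # init.ora - must be set before CREATE DATABASE is run
        db_block_size=16384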

  • Tablespace creation with different block size

    OS - rhel 4.3
    kernel - 2.6.9
    Oracle - 10.2
    Hardware - IBM X series 346.
    Defined block size for the db - 8kb.
    Question: Can I create a tablespace with 16k?
    regards,
    Lily

    You can create a tablespace with 16k blocks if, as Daljit suggests, you create a 16k block buffer cache.
    I would, however, ask why you would want to create such a tablespace? The ability to have tablespaces with different block sizes in a single database was created primarily to allow the use of transportable tablespaces in situations where different databases had different block sizes. If you are trying to create a 16k block size for a different reason, there are a couple of things to be aware of:
    - While there are theories out there that tablespaces with different block sizes can have dramatic effects on performance, I've never seen any solid evidence that backs up this theory.
    - Using different block sizes in the same database can significantly increase the complexity of managing a database. You'll have multiple buffer caches that you'll have to size manually, for example, since automatic SGA management doesn't work with non-standard block size buffer caches.
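    If you do go ahead, the mechanics are roughly as follows (a hedged sketch; the tablespace name, datafile path, and sizes are placeholders, and the non-default cache must exist before the tablespace can be created):

        -- the 16k buffer cache must be configured first
        ALTER SYSTEM SET db_16k_cache_size = 64M;

        -- then a tablespace can be created with the non-default block size
        CREATE TABLESPACE demo_16k
          DATAFILE '/u01/oradata/orcl/demo_16k01.dbf' SIZE 100M
          BLOCKSIZE 16K;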
    Justin

  • Any reliable cases where larger block sizes are useful?

    So I did some googling around to read up on 16kb or larger blocksizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the usage of larger blocksizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people and other guys who write articles where they do quality testing found cases where larger block sizes are useful?
    I don't have a specific need. I'm just curious. Every time I look this up I get buried in generic copy-and-paste blog posts that copy the docs, the generic test cases that were debunked by the guys above, and other junk. So it's hard to look for this.

    Guess2 wrote:
    So I did some googling around to read up on 16kb or larger blocksizes. I found a series of articles by Jonathan Lewis and Richard Foote (plus other DBAs whose posts I trust) debunking the usage of larger blocksizes. I have not been able to find a single article, blog post, or forum post with a legitimate case where a larger block size actually improves performance. It's hard to google this stuff because the good stuff is buried beneath all the trash.
    So have any of the Oak Table people and other guys who write articles where they do quality testing found cases where larger block sizes are useful?
    Lurking in the various things I've written about block sizes there are a couple of comments about using different block sizes (occasionally) for LOBs - though this might be bigger or smaller depending on the sizes of the actual LOBs and the usage pattern: it also means you automatically separate the LOB cache from the main cache, which can be very helpful.
    I've also suggested that for IOTs (index organized tables), where the index entries can be fairly large and you don't want to create an overflow segment, you may get some benefit if the larger block size typically allows all rows for a given (partial) key value to reside in just one or two blocks. The same argument can apply, though with slightly less strength, for "fat" indexes (i.e. ones you've added columns to in order to avoid visiting the table for very important time-critical queries). The drawback in these two cases is that you're second-guessing, and to an extent choking, the LRU algorithms, and you may find that the gain on the specific indexes is obliterated by the loss on the rest of the caching activity.
    Regards
    Jonathan Lewis

  • Raid storage usage and block size

    We have two XServe RAID units Raid 5 and we are adding a new 16 bay ACNC raid with 16 1.5TB drives in Raid 6 + Hot Spare. I initialized the Raid 6 with 128K block size. The total data moving from the older raid volumes is around 5.7TB, but on the new Raid it is taking around 7.4TB of space. Is this due to the 128K block size? This is a prepress server so most of the files are quite large, but there may be lots of small files as well.

    Hi
    RAID 0 does indeed offer best performance, however if any one drive of the striped set fails you will lose all your data. If you have not considered a backup strategy now would be the time to do so. For redundancy RAID 1 Mirror might be a better option as this will offer a safety net in case of a single drive failure. A RAID is not a backup and you should always consider a workable backup strategy.
    Purchase another two 1TB drives and you could consider RAID 10 (two stripes, mirrored).
    Not all your files will be large ones as I'm guessing you'll be using this workstation for the usual mundane matters such as email etc? Selecting a larger block size with small file sizes usually decreases performance. You have to consider all applications and file sizes, in which case the best block size would be 32k.
    My 2p
    Tony

  • Install Recommendations (RAID, ASM, Block Size etc)

    Hello,
    I am about to set up a new Oracle 10.2 Database server. In the past, I used RAID5 since 1) it was a fairly small database 2) there were not a lot of writes 3) high availability 4) it wasted less space compared to other RAID techniques.
    However, even though our database is still small (around 100GB), we are noticing that when we update our data, the time it takes is starting to grow to a point whereby the update that used to take about an hour, now takes 10-12 hours or more. One thing we noticed that if we created another tablespace which had a block size of 16KB versus our normal tablespace which had a block size of 8KB, we almost cut the update time in half.
    So, we decided that we should really start from scratch on a new server and tune it optimally. Here are some questions I have:
    1) Our server is a DELL PowerEdge 2850 with 4x146GB Hard Drives (584GB total). What is the best way to set up the disks? Should I use RAID 1+0 for everything? Should I use ASM? If I use ASM, how is the RAID configured? Do I use RAID0 for ASM since ASM handles mirroring and striping? How should I set up the directory structure? How about partitioning?
    2) I am installing this on Linux and when I tried on my old system to use 32K block size, it said I could only use 16K due to my OS. Is there a way to use a 32K block size with Linux? Should I use a 32K block size?
    Thanks!

    The way I usually handle databases of that size, if you don't feel like migrating to ASM redundancy, is to use RAID-10. RAID5 is HORRIBLY slow (your redo logs will hate you) and if your controller is any good, a RAID-10 will be the same speed as a RAID-0 on reads, and almost as fast on writes. Also, when you create your array, make the stripe blocks as close to 1MB as you can. Modern disks can usually cache 1MB pretty easily, and that will speed up the performance of your array by a lot.
    I just never got into ASM, not sure why. But I'd say build your array as a RAID-10 (you have the capacity) and you'll notice a huge difference.
    16k block size should be good enough. If you have recordsets that are that large, you might want to consider tweaking your multiblock read count.
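    For what it's worth, the parameter behind "multiblock read count" is DB_FILE_MULTIBLOCK_READ_COUNT, which controls how many blocks Oracle reads per I/O call during full scans. A hedged sketch (the value 64 is illustrative, not a recommendation):

        -- check the current setting (SQL*Plus)
        SHOW PARAMETER db_file_multiblock_read_count

        -- raise it so block_size * count approaches the maximum I/O size
        ALTER SYSTEM SET db_file_multiblock_read_count = 64;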
    ~Jer

  • What USB storage devices have a block size of 512 bytes?

    After pulling my hair out for weeks trying to get a usb hard drive to work with my new AirPort Extreme (802.11n), I ran across this
    http://docs.info.apple.com/article.html?artnum=305038
    AirPort Extreme (802.11n): USB storage device supported formats and protocols
    You can connect USB-based storage devices to an AirPort Extreme (802.11n). Learn which formats and protocols are supported.
    The AirPort Extreme (802.11n) supports USB storage devices that have a block size of 512 bytes, and are formatted as Mac OS Extended (HFS-plus), FAT16, or FAT32. Not all USB storage devices use a block size of 512 bytes.
    The AirPort Extreme (802.11n) shares storage devices based on the format used to initialize the storage device. For example, if HFS-plus formatting was used, AFP and SMB/CIFS protocols are used to share the device on the network. If FAT16 or FAT32 was used, SMB/CIFS protocols are used.
    The AirPort Extreme (802.11n) works with disks that have a single partition and are not software RAID volumes (no more than one volume per physical disk). If the disk is a self-contained RAID that presents itself to a computer as a single volume requiring no software support, then it is supported.
    Note: Use AirPort Disk Utility to discover and mount AirPort Extreme-based volumes over the network.
    Now, this information is not easily obtainable while shopping for a new usb hard drive. How do I find out which ones support this 512 byte block size?
    Would have been nice to know that not all usb hard drives are supported by the AirPort Extreme (802.11n) before I purchased it.
    Thanks
    J Riley

    Duane posted a link to an unofficial 802.11n Airport Extreme Hard Drive Compatibility List.
    http://www.ifelix.co.uk/tech/8014.html
    Still not enough information to make an informed purchase that will work.

  • DASYLAB QUERIES on Sampling Rate and Block Size

    HELP!!!! I have been dwelling on DASYLab problems for a few weeks and haven't come to any conclusion. Hope that someone will be able to help. Lots of thanks!
    1. I need to have more data points, so I increase the sampling rate (SR). When the sampling rate is increased, the block size (BS) increases correspondingly.
    For a low sampling rate (SR<100Hz) and a block size of 1, the recorded time in DASYLab and the real experimental time are the same. But problems start when SR>100Hz for BS=1: I realized that the recorded time in DASYLab differs from the real time. To solve the time difference problem, I've decided to use "AUTO" block size.
    Qn1: Is there any way to solve the time difference problem for high SR?
    Qn2: For auto block size, is the recorded result in DASYLab at one moment in time the actual value, or has it been overwritten by the value from the previous block when AUTO BS is chosen?
    2. I've tried getting the result for both BS=1 and BS=auto. Regardless of the sampling rate, the values obtained when BS=1 are always larger than those with auto block size. Qn1: Which is the actual result of the test?
    Qn2: Is there any best combination of block size and sampling rate that can be used?
    Hope someone is able to help me with the above problem.
    Thanks-a-million!!!!!

    Generally, the DASYLab sampling rate to block size ratio should be between 2:1 and 10:1.
    If your sample rate is 1000, the block size should be 500 to no smaller than 100.
    Very large block sizes that encompass more than 1 second worth of data often cause display delays that frustrate users.
    Very small block sizes that have less than 10 ms of data cause DASYLab to bog down.
    Sample rate of 100 samples / second and a block size of 1 is going to cause DASYLab to bog down.
    There are many factors that contribute to performance, or lack thereof - the speed and on-board buffers of the data acquisition device; the speed, memory, and video capabilities of the computer; and the complexity of the worksheet. As a result, we cannot be more specific, other than to provide you with the rule of thumb above and suggest that you experiment with various settings, as you have done.
    Usually the only reason that you want a small block size is for closed loop control applications. My usual advice is that DASYLab control is around 1 to 10 samples/second. Much faster, and delays start to set in. If you need fast, tight control loops, there are better solutions that don't involve Microsoft Windows and DASYLab.
    Q1 - without knowing more about your hardware, I cannot answer the question, but see above. Keep the block size ratio between 2:1 and 10:1.
    Q2 - without knowing more about your hardware and the driver, I'm not sure that I can fully answer the question. In general, the DASYLab driver instructs the DAQ device driver to program the DAQ device to a certain sampling rate and buffer size. The DASYLab driver then retrieves the data from the intermediate buffers and feeds it to the DASYLab A/D Input module. If the intermediate buffers are too small, or the sample rate exceeds the capability of the built-in buffers on the hardware, then data might be overwritten. You should have received warning or error messages from the driver.
    Q3 - See above.
    It may be that your hardware driver is not configured correctly. What DAQ device, driver, DASYLab version, and operating system are you using? How much memory do you have? How complex is your worksheet? Are you doing control?
    Have you contacted your DASYLab reseller for more help? They should know your hardware better than I do.
    - cj
    Measurement Computing (MCC) has free technical support. Visit www.mccdaq.com and click on the "Support" tab for all support options, including DASYLab.

  • How to change existing database block size in all tablespaces

    Hi,
    Need help to change the block size for my existing database, which has an 8KB block size.
    I have read that we can only choose the block size during database creation, but I want to change it after installation,
    because for some reason I don't want to change the database installation script.
    Can anyone list the steps to change the database block size for all existing tablespaces (except SYSTEM and TEMP)?
    I want to change it to 32KB.
    Thank you for you time.
    -Rushang Kansara

    > We are facing more and more physical reads. I thought by using a 32K block size we would resolve that...
    A physical read reported by Oracle may not be one - it could well be a logical read from the o/s file system cache and not a true physical read. With raw devices, for example, a physical I/O reported by Oracle is indeed one, as there is no o/s cache for raw devices. So one needs to be careful how one interprets numbers like physical reads.
    Lots of physical reads may not necessarily be a bad thing. In contrast, a high percentage of "good/fast" logical reads (i.e. a high % buffer cache hit ratio) may indicate a serious problem with application design - as the application is churning through the exact same data again and again and again. Applications should typically only make a single pass through a data set.
    The best way to deal with physical reads is to make them fewer. Simple example: a database deals with a lot of inserts. Some bright developer decided to over-index a table. Numerous indexes for the same columns exist in different physical column orders.
    Oracle now spends a lot of time dealing (reading) with these indexes when inserting (or updating a row). A single write I/O may incur a 100 read I/Os as a result of these indexes needing to be maintained.
    The bottom line is that "more and more physical I/O" is merely a symptom of a problem. Trying to speed these up could well be a wasted exercise. Besides, the most optimal approach to "lots of I/O" is to tune it to make less I/O.
    I/O is the most expensive operation for a RDBMS. It is very difficult to make this expense less (i.e. make I/Os faster). It is more effective to make sure that you use this expensive resource in an optimal way.
    Simple example: a single very large table with 4 indexes is not a very efficient design I/O-wise. A single very large partitioned table with local indexes can reduce I/O on that table by up to 80% in my experience.
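    If you want to see how much of the workload really is physical versus logical reads before tuning anything, the counters are in V$SYSSTAT. A quick check, not a tuning methodology:

        -- cumulative instance-wide read counters since startup
        SELECT name, value
          FROM v$sysstat
         WHERE name IN ('physical reads', 'session logical reads');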

  • Can a tablespace block size differ from the database block size

    I have a 10.2.0.3 database in Unix system.
    I created a database using the default block size of 8k. However, the client application requires a 16k block size. Can I work around this by creating a tablespace with a 16k block size, instead of dropping and recreating the database?
    Thanks a lot!

    As Steven pointed out, you certainly can.
    I would generally question, though, whether you should.
    - Why does the application require 16k block sizes? If this is a custom application, it almost certainly doesn't really require 16k blocks. If this is a packaged application, it probably doesn't really require 16k blocks. If 16k blocks are a requirement for support, I would wager that having the application's objects in 16k block size tablespaces in a database with 8k blocks would not be supported.
    - Mixing block sizes increases the management complexity of your database, potentially substantially. You need to specify a completely separate buffer cache for the 16k blocks, a buffer cache that is not integrated with Oracle's automatic SGA management functionality. Figuring out how to split up the buffer cache between 8k and 16k blocks tends to be rather hard (particularly if the mix changes over time), which means DBAs will spend substantially more time managing the SGA in this sort of system than in a vanilla 10.2.0.3 system, and will have many more opportunities to set things up incorrectly.
    Justin

  • Larger block size = faster DB?

    hi guys,
    (re Oracle 9.2.0.7)
    it seems that a larger block_size makes the database perform operations faster? Is this correct? If so, why would anyone use 2k block sizes?
    thanks

    Hi Howard,
    it's uncharted territory, especially at the higher blocksizes which seem to be less-well-tested than the smaller ones
    Yup, Oracle releases junkware all the time, untested poo . . .
    You complain, file a bug report, and wait for years while NOTHING happens . . .
    Tell me Howard, how incompetent does Oracle Corporation have to be not to test something as fundamental as blocksizes?
    I've seen Oracle tech support in-action, it's like watching the Keystone Cops:
    Oracle does not reveal the depth of their quality assurance testing, but many Oracle customers believe that Oracle does complete regression testing on major features of their software, such as blocksize. However, Oracle ACE Director Daniel Morgan, says that “The right size is 8K because that is the only size Oracle tests”, a serious allegation, given that the Oracle documentation, Oracle University and MetaLink all recommend non-standard blocksizes under special circumstances:
    - Large blocks gives more data transfer per I/O call.
    - Larger blocksizes provides less fragmentation (row chaining and row migration) of large objects (LOB, BLOB, CLOB)
    - Indexes like big blocks because index height can be lower and more space exists within the index branch nodes.
    - Moving indexes to a larger blocksize saves disk space. Oracle says "you will conserve about 4% of data storage (4GB on every 100GB) for every large index in your database by moving from a 2KB database block size to an 8KB database block size."
    So, does Oracle really not do testing with non-standard blocksizes? Oracle ACE Director Morgan says that he was quoting Bryn Llewellyn of Oracle Corporation:
    “Which brings us full circle to the statement Brynn made to me and that I have repeated several times in this thread. Oracle only tests 8K blocks.”
    Wow.

  • How to change block size to 32K at installation of Oracle

    hey
    I'm going to install Oracle on 64-bit Windows Server 2003, but the problem I'm facing is that I don't know how to change the block size from 8K to 32K. I usually run setup and install Oracle, but during this process it does not prompt me to change the block size. Please help: what steps should I take to successfully install 64-bit Oracle with a 32K block size? I tried to find a document link, but everybody discusses that the db block size can be changed, not how.

    Yes, you are right, but what can I do? As per my understanding of the statement below, I should create a DSS database instead of a general purpose database. Please help, what can I do now?
    From Bug 5141453, development indicates that DB_BLOCK_SIZE cannot be changed for the General Purpose, Transaction Processing and Data Warehouse templates, as the seed database uses an 8k block size.
    You should create a custom database to get a different block size.

  • Multiple block size

    Hi
    When I use multiple block size tablespaces (32K),
    I have to set the DB_32K_CACHE_SIZE parameter.
    Assume the size of the buffer cache is 500M.
    If I set DB_32K_CACHE_SIZE to 200M,
    will there be only 300M available in the buffer cache? How does the allocation work?

    The vast majority of databases do not need to deploy multiple blocksizes.
    As noted here it is not for all databases only specific: http://www.dba-oracle.com/t_multiple_blocksizes_summary.htm
    I have some doubts on this issue... When in doubt, consult the official documentation and Metalink.
    Here is IBM's Oracle documentation on multiple blocksizes: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100883
    While most customers only use the default database block size, it is possible to use up to 5 different database block sizes for different objects within the same database.
    Having multiple database block sizes adds administrative complexity and (if poorly designed and implemented) can have adverse performance consequences. Therefore, using multiple block sizes should only be done after careful planning and performance evaluation.
