USB TV Box?

The DDK sample uses the PCI bus to transfer data to the capture pin's queue,
and uses a DPC to copy the synthesized buffer data into that queue. But how do
I do the same thing for a USB device? When I transfer the data over USB, there
is no interrupt to trigger a DPC; there is just an OnComplete routine that runs
when the transfer finishes. And once the USB transfer has started, I cannot
pass the buffer address to the USB driver again; I only passed it the first
time.

In the sample, the pin's Process routine passes the buffer address each time to
CHardwareSimulation::ProgramScatterGatherMappings in hwsim, and the data is
filled in in FillScatterGatherBuffers.

How can I do this with a USB device?

Over the past several months, we have had an extensive discussion on
this mailing list describing how to remove the PCI-specific hardware
abstraction from avshws and replace it with a USB hardware abstraction.
It’s not difficult, but it’s also not trivial. You have to think about
what information you HAVE, and what information you NEED, then hook up
the two.

It’s true that you don’t get an interrupt to trigger a DPC with a USB
transfer. However, you do get a completion routine called when your URB
completes, and that’s just as good.

I’m not sure you will want to use the buffers directly from the stream.
It is possible to do so, but you have to think about the timing. You
need to keep up with a real-time data stream from your USB device. To
do that, you need to have multiple URBs queued up, so that there is
always a buffer waiting. USB is a scheduled bus: the host controller
schedules all of the transfers for a frame before the frame begins. If
you don’t have a buffer waiting when a frame gets scheduled, you will
miss that frame.

So, you will probably want to have your own pool of buffers for your own
URBs that you submit and recirculate. Then, in the completion routine,
you can copy the data from there to the leading edge buffer.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

The driver we are developing for the USB TV box should capture the video,
audio, and TS streams at the same time (we need it to support PIP). Each
capture input pin will have its own USB pipe: video has an isochronous pipe,
audio has an isochronous pipe, and TS has a bulk pipe.
As you said, we should supply multiple URBs, so we need to handle the audio
and TS data the same way.
Will the USB bus queue up the three pipes together if we need to capture all
three pipes at the same time?
For example: the video iso pipe submits two IRP/URBs, then the audio iso pipe
submits two IRP/URBs, then the TS bulk pipe submits several IRP/URBs. Suppose
the device sends no video data to the video pipe (so the URBs submitted on the
video pipe stay pending), but it does send audio and TS data to their pipes.
Will the audio and TS pipes still receive their data and run their OnComplete
routines?

The USB driver is written in C and the AVStream driver in C++. Should we write
everything (including the USB code) in C++, or can we keep the USB driver in C
and write the AVStream part in C++? Which way is better?



weilufei wrote:

The driver we are developing for the USB TV box should capture the video,
audio, and TS streams at the same time (we need it to support PIP). Each
capture input pin will have its own USB pipe: video has an isochronous pipe,
audio has an isochronous pipe, and TS has a bulk pipe.

I don’t quite understand that. A transport stream is just a wrapper for
video and audio streams. That is, the transport stream INCLUDES the
video and audio. Why do you have separate pipes for video and audio?
Are you uncompressing the transport stream? Why would you do that?

As you said, we should supply multiple URBs, so we need to handle the audio
and TS data the same way.
Will the USB bus queue up the three pipes together if we need to capture all
three pipes at the same time?

Each pipe is handled separately. You queue up URBs separately, and the
software will keep track. Of course, all three pipes will be competing
for the same bandwidth.

For example: the video iso pipe submits two IRP/URBs, then the audio iso pipe
submits two IRP/URBs, then the TS bulk pipe submits several IRP/URBs. Suppose
the device sends no video data to the video pipe (so the URBs submitted on the
video pipe stay pending), but it does send audio and TS data to their pipes.
Will the audio and TS pipes still receive their data and run their OnComplete
routines?

The pipes are separate. Even if the video pipe is blocked, the other
pipes will still run.

The USB driver is written in C and the AVStream driver in C++. Should we write
everything (including the USB code) in C++, or can we keep the USB driver in C
and write the AVStream part in C++? Which way is better?

“Better” is a difficult thing to define. AVStream drivers are
traditionally written in C++. If it were me, I would pull out the
interesting parts of your USB driver and integrate them into my C++
AVStream driver as a “hardware abstraction” class. However, it is quite
possible to have your C++ AVStream handlers call into C code.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.


I don’t quite understand that. A transport stream is just a wrapper for
video and audio streams. That is, the transport stream INCLUDES the video
and audio. Why do you have separate pipes for video and audio?
Are you uncompressing the transport stream? Why would you do that?

I think the OP said that Picture-in-Picture (PIP) support was desired. I
took that to mean that the device supplies two independent streams of
video/audio. Perhaps the ‘second’ stream is available as a TS. This would
not be an outlandish result for a dual-tuner ATSC/Analog setup where the
ATSC stream is MPEG/TS and the Analog is, well, whatever.

Perhaps I got that wrong, though.
-dave

What David said is right, but that is not the key point of my question. Now I
know that each pipe is handled separately. That is okay, thanks very much!

In the europa example, I found this code snippet in the CAnlgVideoCap::Start
function:
////////////////////////
do  // while system buffers are available
{
    ntStatus = CAnlgVideoCap::Process();
    dwIndex++;
} while( ntStatus == STATUS_SUCCESS );
///////////////////////
I don't really understand what this means. Does it mean it makes five clone
pointers?
I tried this method in the avshws sample. I did the same thing the europa
sample does and put this code snippet in CCapturePin::SetState when it goes to
the KSSTATE_RUN state. But at that point the system buffers are not ready yet,
so it fails: I cannot get the leading edge stream pointer. So I am puzzled how
the europa example does it.
I also have some doubt about the stream pointer.
If I specify 5 frames in the queue, like this:
Queue
frame4
frame3
frame2
frame1
frame0
where does the leading edge point in the queue at first? Does it point to
index 0, or to index 4? When I advance the leading edge pointer with
KsStreamPointerAdvance, it returns STATUS_DEVICE_NOT_READY every time, so I
guessed that the leading edge points to frame4, i.e. that it references the
last frame in the queue, and that this is why advancing it always returns
STATUS_DEVICE_NOT_READY.
But if my understanding is wrong and the leading edge points to frame0 first,
shouldn't advancing it return STATUS_SUCCESS?
How can I queue up buffers the way the europa example does?



weilufei wrote:

In the europa example, I found this code snippet in the CAnlgVideoCap::Start
function:
////////////////////////
do  // while system buffers are available
{
    ntStatus = CAnlgVideoCap::Process();
    dwIndex++;
} while( ntStatus == STATUS_SUCCESS );
///////////////////////
I don't really understand what this means. Does it mean it makes five clone
pointers?

Yes, sort of. That code is just trying to make clones of all of the
buffers that are queued up for the pin. That’s only useful at startup
time, because once you get rolling, Process will be called every time a
new buffer appears.

I tried this method in the avshws sample. I did the same thing the europa
sample does and put this code snippet in CCapturePin::SetState when it goes to
the KSSTATE_RUN state. But at that point the system buffers are not ready yet,
so it fails: I cannot get the leading edge stream pointer. So I am puzzled how
the europa example does it.

You should be able to get a leading edge pointer as soon as you
transition to KSSTATE_PAUSE. What error do you get?

I also have some doubt about the stream pointer.
If I specify 5 frames in the queue, like this:
Queue
frame4
frame3
frame2
frame1
frame0
where does the leading edge point in the queue at first? Does it point to
index 0, or to index 4?

The leading edge points to the oldest buffer that has not yet been
advanced. If frame0 is the oldest one, then
KsPinGetLeadingEdgeStreamPointer will get frame0. After you call
KsStreamPointerAdvance on frame0, then the leading edge is frame1.

When I advance the leading edge pointer with KsStreamPointerAdvance, it
returns STATUS_DEVICE_NOT_READY every time,

That sounds like the graph hasn’t really transitioned to a RUN state yet.

How can I queue up buffers the way the europa example does?

Why do you think you need to? One possibility is to add code to the
Process callback that calls KsPinAttemptProcessing if it was able to get
a buffer. That will cause Process to be called again.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

In my program, when it first enters Process(), KsStreamPointerAdvance
sometimes returns STATUS_DEVICE_NOT_READY, but sometimes, when I restart the
graph, it returns STATUS_SUCCESS.
For example, I see the following data (the queue size is 4):
Leading->StreamHeader->Data
0xf6eea800 frame0
0xf6e1f000 frame1
0xf6d54800 frame2
0xf6c89000 frame3
Condition 1:
Sometimes when I first start the program, Leading->StreamHeader->Data is
0xf6eea800; after I advance it, Leading->StreamHeader->Data immediately
becomes 0xf6e1f000, and KsStreamPointerAdvance returns STATUS_SUCCESS.
Condition 2:
Sometimes when I first start the program, Leading->StreamHeader->Data is
0xf6eea800, but when I advance it, Leading becomes NULL and
KsStreamPointerAdvance returns STATUS_DEVICE_NOT_READY.
I don't know why that happens. How can I make sure I always get Condition 1?

When I use multiple IRP/URBs, I need multiple clone stream pointers, because I
fill the data into the pin's buffers in the USB driver's OnComplete routine:
in the first URB's OnComplete I copy the data to the first clone stream
pointer, and in the second URB's OnComplete I copy the data to the next clone
stream pointer. Is that a feasible method?
If it is, when should I delete a cloned stream pointer once it has been
filled?



weilufei wrote:

In my program, when it first enters Process(), KsStreamPointerAdvance
sometimes returns STATUS_DEVICE_NOT_READY, but sometimes, when I restart the
graph, it returns STATUS_SUCCESS.
For example, I see the following data (the queue size is 4):
Leading->StreamHeader->Data
0xf6eea800 frame0
0xf6e1f000 frame1
0xf6d54800 frame2
0xf6c89000 frame3
Condition 1:
Sometimes when I first start the program, Leading->StreamHeader->Data is
0xf6eea800; after I advance it, Leading->StreamHeader->Data immediately
becomes 0xf6e1f000, and KsStreamPointerAdvance returns STATUS_SUCCESS.
Condition 2:
Sometimes when I first start the program, Leading->StreamHeader->Data is
0xf6eea800, but when I advance it, Leading becomes NULL and
KsStreamPointerAdvance returns STATUS_DEVICE_NOT_READY.
I don't know why that happens. How can I make sure I always get Condition 1?

I don’t know. Are you saying you get different results on the exact
same computer? It could be a timing thing; perhaps you are just getting
rolling before all of the buffers have been queued up. Have you
compared your debug output line by line to make sure you aren’t
returning a failure code somewhere along the way?

When I use multiple IRP/URBs, I need multiple clone stream pointers, because I
fill the data into the pin's buffers in the USB driver's OnComplete routine:
in the first URB's OnComplete I copy the data to the first clone stream
pointer, and in the second URB's OnComplete I copy the data to the next clone
stream pointer. Is that a feasible method?

Probably not, because it is unlikely that a single URB is large enough
to gather an entire frame. You can’t advance the stream pointer until
you have an entire frame, unless this is a transport stream that works
in smaller packets.

If that is a feasible method, when should I delete a cloned stream pointer
once it has been filled?

You should delete the clone (and move to the next one) when the frame is
complete.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I have traced through the code. I find that when SetState returns in the
KSSTATE_RUN state, Process() is called at once, and it is then called four
times in a loop to queue up the buffers. My problem happens exactly in this
part. Sometimes it can queue up the buffers okay (Condition 1), but sometimes
it cannot (Condition 2). Even so, it outputs the data fine once the stream is
rolling, and I can watch the video in a window.

As I mentioned last time, the europa example calls CAnlgVideoCap::Start (which
contains the code snippet) and queues up the buffers in KSSTATE_RUN. But at
that point the program has not yet returned from SetState (in KSSTATE_RUN), so
how can it get the leading edge? I tried it, and the leading edge I get is
NULL. I only get a valid leading edge after the program returns from SetState
and Process is called.

How does AVStream call the Process callback? The DDK documentation says that
AVStream will call Process at the frequency of AvgTimePerFrame. So if I
specify 25 fps, Process should be called every 40 ms. But I find that Process
is not called at that rate in my program:
Time   Message
8.078  Info: In CCapturePin::Process=44
8.125  Info: In CCapturePin::Process=45   // interval is 47 ms
8.188  Info: In CCapturePin::Process=46   // interval is 63 ms
The interval between the 44th and 45th Process calls is 47 ms, but the
interval between the 45th and 46th calls is 63 ms. So I don't understand how
the Process call frequency is controlled.


weilufei wrote:

I have traced through the code. I find that when SetState returns in the
KSSTATE_RUN state, Process() is called at once, and it is then called four
times in a loop to queue up the buffers. My problem happens exactly in this
part. Sometimes it can queue up the buffers okay (Condition 1), but sometimes
it cannot (Condition 2). Even so, it outputs the data fine once the stream is
rolling, and I can watch the video in a window.

As I said, this is probably just a timing issue. Sometimes, your
SetState handler gets called before the user-mode graph has queued up
all of the buffers that will be needed.

As I mentioned last time, the europa example calls CAnlgVideoCap::Start (which
contains the code snippet) and queues up the buffers in KSSTATE_RUN. But at
that point the program has not yet returned from SetState (in KSSTATE_RUN), so
how can it get the leading edge? I tried it, and the leading edge I get is
NULL. I only get a valid leading edge after the program returns from SetState
and Process is called.

Well, then, I guess you’ll have to do it that way.

How does AVStream call the Process callback? The DDK documentation says that
AVStream will call Process at the frequency of AvgTimePerFrame.

On average, yes, it does. That doesn’t mean it will get called EXACTLY
on the frame time. You might get three in a row, and then have a long
gap, and then a couple more, etc.

So if I specify 25 fps, Process should be called every 40 ms. But I find that
Process is not called at that rate in my program:
Time   Message
8.078  Info: In CCapturePin::Process=44
8.125  Info: In CCapturePin::Process=45   // interval is 47 ms
8.188  Info: In CCapturePin::Process=46   // interval is 63 ms
The interval between the 44th and 45th Process calls is 47 ms, but the
interval between the 45th and 46th calls is 63 ms. So I don't understand how
the Process call frequency is controlled.

The “Process” callback is not used to control the frequency. The SOLE
PURPOSE of the Process callback is to hand you empty buffers, and give
you an opportunity to fill them. It is not involved with timing in any
way. You fill the buffers you are given, or arrange for them to be
filled later. When you fill them, you assign them a timestamp. Some
filter farther along the graph (usually the renderer) will manage the timing.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

In the avshws sample there is a timer to control the output frequency, and
CompleteMappings assigns the timestamp for every frame of data. But in my USB
box the output rate cannot be controlled, so how can I control the output
frame rate?



I think that you can get the timestamps to stamp your frames from the USB
stack.

Are you using an isoch or a bulk pipe?


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com


weilufei wrote:

In the avshws sample there is a timer to control the output frequency, and
CompleteMappings assigns the timestamp for every frame of data. But in my USB
box the output rate cannot be controlled, so how can I control the output
frame rate?

Of course it is controlled! Your USB box will be delivering television
frames, right? The frame rate will be strictly controlled by the
television signal coming in. That is a much better source of timing
than the fake one used in the sample.

The driver should not control the frame rate. The frame rate should be
controlled by the device.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Maxim S. Shatskih wrote:

I think that you can get the timestamps to stamp your frames from the USB
stack.

Actually, an AVStream driver should always timestamp its frames by
fetching the current stream time from the master clock that is assigned
to the filter by the graph. It shouldn’t try to generate its own
timestamps.

This advice is counterintuitive, because it has always seemed to me that
a driver should generate an “idealized” clock, based on the steady frame
rate from the camera, but time and time again it has been demonstrated
to me that such a strategy produces inferior results in real-world
DirectShow graphs. The best results are achieved by using the current
stream time at the point when the frame is ready to go.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

We developed the chip ourselves, so the format of the packed data is also
defined by us. The data includes two TS streams plus video and audio data; we
use a bulk pipe for the TS, one iso pipe for video, and one iso pipe for
audio.

Our device just sends packed data over USB, and the USB driver passes the data
on to the capture pins. It is all just a process of transferring data, and the
frame data is packed into one continuous stream. So I don't know how to
control it.

Can you tell me how other devices define their data format (for analog video
and audio data)?



weilufei wrote:

We developed the chip ourselves, so the format of the packed data is also
defined by us. The data includes two TS streams plus video and audio data; we
use a bulk pipe for the TS, one iso pipe for video, and one iso pipe for
audio.

Our device just sends packed data over USB, and the USB driver passes the data
on to the capture pins. It is all just a process of transferring data, and the
frame data is packed into one continuous stream. So I don't know how to
control it.

But where does the video and audio data come from? If it is coming from
an antenna, or from a satellite, or from cable, then the television
frames are being generated one frame at a time, in the proper timing.
You can’t push through a thousand frames at once, because the original
source signal is only sending them to you 25 times per second. It is a
real-time process. The timing is enforced at the signal source.

The only way you can get frames faster than real time is if your source
is, for example, reading a movie file from a hard disk. If that’s the
case, then perhaps your device should be delaying the frames so that
they are delivered at the proper time.

Transport streams have time stamps embedded in the data streams. You
don’t have to stamp the packets; the MPEG demux will do that based on
the embedded time.

Can you tell me how other devices define their data format (for analog video
and audio data)?

For NTSC and PAL television, the format of analog video and audio
signals is well regulated by international standards. The television
signal comes in at a 27 MHz rate, and at that rate, you simply cannot
generate TV frames faster than real time.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I think I need to read more things! -_-
Thanks very much, Tim!



> Actually, an AVStream driver should always timestamp its frames by
> fetching the current stream time from the master clock that is assigned
> to the filter by the graph. It shouldn't try to generate its own
> timestamps.

Sorry Tim, but I would disagree.

Imagine a USB device that is just a dumb source of an AVI file.

To play this AVI correctly, the frames should be timestamped according to the
AVI file (by incrementing the next frame's media time by
AviHeader->MillisecondsPerFrame on each frame) and not according to the master
clock, shouldn't they? That is true for any AVI file, regardless of its source.

For me, DirectShow's whole concept of a master clock exists mainly to
synchronize the video renderer with the audio renderer. Usually the audio
renderer is the master clock source and the video renderer is a master clock
consumer, so the master clock is mainly used by the video renderer to adjust
its display rate to the soundtrack.

Yes, if the stream source is realtime, then usually the source is the master
clock provider and also the timestamper. In this case, both the audio and
video renderers will do their best to synchronize with the realtime source (by
tweaking the sound card's clock if supported, by tweaking the USB HC's frame
clock if the audio destination is a USB device, and so on).

Another consideration.

Imagine a user-mode DirectShow graph that plays an AVI file. In this
situation, each frame's media time will be stamped by the AVI splitter,
incremented by MillisecondsPerFrame for each frame.

The source stamps the media time in the packets. The master clock is used not
for stamping, but for interpreting the stamps. The stamps are defined as
"media time", and the master clock answers the question "what is the
implementation of this media time? what clock is used to measure it?". The
master clock is not about "what are the media time values for the packets?".

Now let's consider a source that uses a USB isoch pipe. For me, it looks
logical to timestamp each packet (DirectShow's "media sample") according to
the ideal frame rate taken from the device setup or the protocol definition
(say 25 fps), and also to provide the master clock by taking it from the USB
stack below (the USB HC's frame clock).

In this case, if the destination is also on the same USB bus (USB headphones,
say), then it will be able to notice (I don't know the exact details of
DShow's Default DirectSound Device; possibly it is smart enough for such a
thing) that the graph's master clock is taken from the same USB stack and the
same USB HC chip, and will have a chance to skip any clock tweaking, SRC, and
the like.

Even if it does do SRC, it will notice that both clocks (the graph's master
clock and its own "natural" clock) are always the same, since they are taken
from the same source, so the SRC will be a no-op.

And, if the destination is some other sound output device, then it will have a
chance to adjust its clock to be a slave of the graph's master clock (the USB
HC clock of the bus that the source device is attached to), or, if the
hardware does not support clock tweaking, SRC will be used.

So, for me, it looks 100% valid to set the timestamps in the frames (“media
samples”) in the USB source driver according to the idealized frame rate
taken from the protocol definition (like 25fps or so - exactly 40ms per
sample).

Also, the source can be smart enough to notice whether DShow has chosen some
other clock as the master instead of the clock it provides. In that case, the
USB source can use USB IOCTLs to tweak the USB HC's frame clock speed to match
the external source, which also means no SRC and absolute synchronization.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com
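
To make the two positions in this exchange easier to compare, here is a tiny
sketch of the idealized timestamping Maxim describes (Tim argues against it in
the next message). StampIdealized, m_NextTime, and m_AvgTimePerFrame are
hypothetical names; the header fields and flags are standard KS, with times in
100 ns units.

////////////////////////
// Each frame gets the previous frame's time plus a fixed AvgTimePerFrame.
void
CCapturePin::StampIdealized(PKSSTREAM_HEADER Header)
{
    Header->PresentationTime.Time        = m_NextTime;
    Header->PresentationTime.Numerator   = 1;
    Header->PresentationTime.Denominator = 1;
    Header->Duration                     = m_AvgTimePerFrame;
    Header->OptionsFlags |= KSSTREAM_HEADER_OPTIONSF_TIMEVALID |
                            KSSTREAM_HEADER_OPTIONSF_DURATIONVALID;

    m_NextTime += m_AvgTimePerFrame;     // e.g. 400000 (40 ms) for 25 fps
}
////////////////////////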

Maxim S. Shatskih wrote:

> Actually, an AVStream driver should always timestamp its frames by
> fetching the current stream time from the master clock that is assigned
> to the filter by the graph. It shouldn’t try to generate its own
> timestamps.
>

Sorry Tim, but I would disagree.

You're entitled to disagree, but you would be wrong. It's just a fact.
I told you it was counterintuitive. With an ideal clock, you get
stuttering previews and strange gaps in captured files. The capture
filter really needs to put in graph time, and later filters fix things up.

You can search the newsgroups about this. The words come from Geraint
Davies, one of the original designers of DirectShow.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.