Community Profile


Anh Tran

MathWorks

Last seen: almost 2 years ago. Active since 2017.

Control Design Automation

Statistics

  • Knowledgeable Level 3
  • Revival Level 2
  • 3 Month Streak
  • First Review
  • Knowledgeable Level 2
  • First Answer

View badges

Content Feed

View by

Answered
How to pretrain a stochastic actor network for PPO training?
Hi Jan, You can pretrain a stochastic actor with Deep Learning Toolbox's trainNetwork with some additional work. Emmanouil gave...

Almost 3 years ago | 1

| Accepted
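A minimal sketch of that pretraining idea, assuming the actor's network can be extracted with getActor/getModel and restored with setModel/setActor (available in recent Reinforcement Learning Toolbox releases); the training data is hypothetical:

```matlab
% Hedged sketch: pretrain an actor's network with supervised learning,
% then put it back into the agent before PPO training.
actor = getActor(agent);          % extract the actor representation
net   = getModel(actor);          % underlying network; type is release-dependent

% Pretrain 'net' with Deep Learning Toolbox on expert (observation, action)
% pairs, e.g. via trainNetwork; a conversion to a layer array or layerGraph
% may be needed depending on the returned network type.

actor = setModel(actor, net);     % put the pretrained network back
agent = setActor(agent, actor);
```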

Answered
On updating the policy with sim functions and Custom Loop
The approach looks OK; however, there is an issue. You must update the agent's actor and critic after each learning iteration. So...

More than 3 years ago | 0

Answered
Splitting the input layer of deep neural network (used for the actor of a DDPG agent)
You can define 2 observation specifications on the environment. Thus, the agent will receive split input to begin with. Moreo...

More than 3 years ago | 0
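As a hedged sketch of that idea, an environment can expose two observation channels by passing a vector of specifications (the dimensions below are made up):

```matlab
% Two observation channels; the agent's network then has two input paths.
obsInfo = [rlNumericSpec([4 1]), rlNumericSpec([3 1])];
actInfo = rlNumericSpec([1 1], 'LowerLimit', -1, 'UpperLimit', 1);

% The environment's step/reset functions (not shown) must then return
% observations as a 2-element cell array matching obsInfo.
```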

Answered
"Unable to evaluate the loss function. Check the loss function and ensure it runs successfully": `gradient` can't access the custom loss function
In the training loop, you collect the actor from agent.brain, which is an rlPGAgent. The actor thus used the loss function def...

More than 3 years ago | 1

| Accepted

Answered
How to extract the trained actor network from the trained agent in Matlab environment? (Reinforcement Learning Toolbox)
You can collect the actor (or policy) from the trained agent with getActor. Then, you can use the actor to predict the best acti...

Almost 4 years ago | 0

| Accepted
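A minimal sketch of that extraction, assuming a trained agent and an observation `obs` of the right size:

```matlab
actor  = getActor(trainedAgent);    % pull the actor representation out of the agent
action = getAction(actor, {obs});   % predict the best action for one observation
% In some releases getAction returns a cell array; unwrap with action{1}.
```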

Answered
Deep Deterministic Policy Gradient Agents (DDPG at Reinforcement Learning), actor output is oscilating a few times then got stuck on the minimum.
A few points I have identified with your original script: You should include the action bounds when defining action specificatio...

Almost 4 years ago | 0

| Accepted
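The action-bounds point can be sketched like this (the limits are illustrative, not from the original thread); bounding the action specification tells the toolbox the valid range for the actor's output:

```matlab
% Define a bounded continuous action channel so the agent knows the
% valid action range (limits here are made up for illustration).
actInfo = rlNumericSpec([1 1], ...
    'LowerLimit', -2, ...
    'UpperLimit',  2);
```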

Answered
Create custom policy function for a RL DQN.
Currently I do not see any workaround to modify the DQN policy directly with the built-in rlDQNAgent. A possible workaround is to r...

About 4 years ago | 0

Answered
how to use GPU for actor and critic while env simulation happens on multiple cores for RL training
We are continuously improving GPU training performance with parallel computing in future releases. For now, I would recommend th...

About 4 years ago | 0

Answered
rlTable using multiple element in rlFiniteSetSpec
This is a current limitation with rlTable in MATLAB R2020a. To work with multiple observation channels, you can try a neural net...

About 4 years ago | 0

Answered
How can I extract a trained RL Agent's network's weights and biases?
You can get the parameters from the trained agent's critic representation for a DQN agent. In MATLAB R2020a, see getLearnableParameters ...

About 4 years ago | 0

| Accepted
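In R2020a terms, the extraction sketched above looks roughly like this (variable names are illustrative):

```matlab
critic = getCritic(trainedAgent);          % DQN keeps its Q-network in the critic
params = getLearnableParameters(critic);   % cell array of weights and biases
```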

Answered
How to deploy Trained Reinforcement Learning Policy with a NN having two input layer?
As of R2020a, you can create a DQN agent with a Q(s) value function. Q(s) takes an observation as input and outputs Q(s,a) for each po...

About 4 years ago | 0

| Accepted

Answered
load multiple trained reinforcement agents into MATLAB workspace
It is not necessary to load all 2000 agents into MATLAB (this consumes memory and makes it tricky to assign unique names) to evaluate their per...

About 4 years ago | 0

| Accepted
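A hedged sketch of evaluating saved agents one at a time instead of loading all of them at once; the file naming, the variable name inside the MAT-files, and the reward extraction are all assumptions:

```matlab
numAgents = 2000;
avgReward = zeros(numAgents, 1);
for k = 1:numAgents
    s = load(sprintf('agent%04d.mat', k));   % hypothetical file names
    agent = s.saved_agent;                    % hypothetical variable name in the MAT-file
    exp = sim(env, agent);                    % run one evaluation episode
    avgReward(k) = sum(exp.Reward.Data);      % total episode reward (field layout
end                                           % may vary by release)
[~, best] = max(avgReward);                   % index of the best-performing agent
```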

Answered
number of look ahead steps in DDPG Agent Options
I am not sure what reward sampling means. "NumStepsToLookAhead" in rlDDPGAgentOptions changes the critic's target values in ...

About 4 years ago | 1

Answered
how can I display the trained network weights in reinforcement learning agent?
Hi Ru SeokHun, In MATLAB R2019b and below, there is a 2-step process: Use the getActor, getCritic functions to gather the actor a...

About 4 years ago | 1

Answered
How to TRAIN further a previously trained agent?
I will answer again, hopefully clearing up your confusion. % Train the agent trainingStats = train(agent, env, trainOpts); After th...

About 4 years ago | 2
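The continuation step described above can be sketched as follows; because the agent is a handle object, the same variable keeps learning across calls (the option value is illustrative):

```matlab
trainOpts = rlTrainingOptions('MaxEpisodes', 500);
trainingStats = train(agent, env, trainOpts);   % first round of training

% 'agent' has been updated in place; calling train again simply
% continues learning from the current parameters.
moreStats = train(agent, env, trainOpts);
```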

Answered
Clean up Simulink block diagram
From MATLAB R2019b, you can improve your diagram layout and appearance by opening the FORMAT tab on the toolstrip and clicking on A...

About 4 years ago | 5

Answered
Implementing A Siamese Architecture With Matlab
You can refer to the answer in this thread https://www.mathworks.com/matlabcentral/answers/399825-how-to-construct-a-siamese-ne...

More than 4 years ago | 1

| Accepted

Answered
How to construct a Siamese network using Matlab Neural Network Toolbox?
You can refer to these new examples to construct a Siamese network: https://www.mathworks.com/help/deeplearning/examples/train-a-...

More than 4 years ago | 1

Answered
Is there a way to set specific regions on an image for OCR?
You can specify a region of interest, <https://www.mathworks.com/help/vision/ref/ocr.html#bt548t1-1-roi ROI>, as the second argume...

Almost 6 years ago | 0

| Accepted
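A minimal sketch of passing an ROI to ocr (the file name and coordinates are made up):

```matlab
I   = imread('businessCard.png');   % any image; file name is illustrative
roi = [50 40 300 80];               % [x y width height] region to read
txt = ocr(I, roi);                  % run OCR only inside the region
recognized = txt.Text;              % recognized text as a character array
```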

Answered
I want to adapt Fuzzy Logic Toolbox to be able to use the output of one system as the input of another
The current version of Fuzzy Logic Toolbox does not support internal looping of input and output variables. The simplest soluti...

About 6 years ago | 0

Answered
How to provide Negative Samples to trainACFObjectDetector() when using a Ground Truth file
(3) is correct. You do not have to add negative samples because trainACFObjectDetector automatically generates negative samples ...

About 6 years ago | 0

| Accepted

Answered
The battery models in Simscape are too complex. Is there a simple one?
You may want to try <https://www.mathworks.com/help/physmod/elec/ref/battery.html Simple battery model> block. You can right-cli...

About 6 years ago | 0

Answered
How do i calculate the winding R & L as well as magnetizing Rm & Lm of the linear transformer block?
You do not need to calculate these values but rather set them based on your application specification. All the parameters are de...

About 6 years ago | 0

| Accepted

Answered
Train data for Semantic segmentation using existing Nets (e.g. SegNet) for different classes
The <https://www.mathworks.com/help/vision/examples/semantic-segmentation-using-deep-learning.html example> starts with training...

About 6 years ago | 0

Answered
filtfilt provides excessive transient
The transients observed are due to a combination of using a marginally stable filter coupled with the initial condition matching...

About 6 years ago | 2
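One common way to tame such transients, offered as a general sketch rather than this answer's exact fix, is to design the filter as second-order sections, which filtfilt accepts directly and which is numerically far better behaved for marginally stable designs (all values below are illustrative):

```matlab
fs = 1e3;                                  % sample rate (illustrative)
[z, p, k] = butter(8, 50/(fs/2), 'high');  % design as zeros/poles/gain, not b/a
[sos, g]  = zp2sos(z, p, k);               % convert to second-order sections

x = randn(2048, 1);                        % example signal
y = filtfilt(sos, g, x);                   % zero-phase, numerically robust filtering
```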

Answered
How to count the number of objects within an area after simulink simulation ends
Yes, of course. After looking at <https://www.mathworks.com/help/simulink/examples/spiral-galaxy-formation-simulation-using-matl...

About 6 years ago | 1

| Accepted

Answered
HDL coder for Kalman filter does not simulate
Hi Reddy, Are you referring to this <https://www.mathworks.com/help/hdlcoder/examples/fixed-point-type-conversion-and-refinem...

About 6 years ago | 0

Answered
How to insert a curve stemming from a measure in Simulink to use the parameter estimation?
Hi Frank, It seems that you are trying to input a vector into Simulink scope block. Simulink will treat each element of your ...

About 6 years ago | 0

| Accepted

Answered
Is it possible to toggle visibility of signals in (floating) scope during simulation?
I was not able to find information on how to toggle which input signals are shown on the scope programmatically. I will create a...

More than 6 years ago | 0

Answered
Is it possible to toggle visibility of signals in (floating) scope during simulation?
I tried a simple test to check if setting scope configuration in runtime is possible or not: 1. Open shipped demo 'vdp' >> v...

More than 6 years ago | 0
